Now, this is the upstream for our Advanced Cluster Management product. It's being bootstrapped right now and donated to the CNCF, so go visit it, join the community, and give it a star. This is simplified fleet management, and it's key for managing all of your clusters. There's a bunch of cool things under the hood for cluster provisioning, through a project called OpenShift Hive — that's another open source project. It provides a framework for governance and compliance tasks: delivering your policies down to your fleet, auditing your fleet, making sure it's in compliance with those policies, and all other kinds of security and compliance needs. Another thing you need to do is deploy your applications out to your fleet, so that's another key part of this project — you can do dynamic placement and other policies around your applications. Under the hood it uses projects like Argo CD, which we'll talk more about later, Open Policy Agent, which we'll also talk about more later, and Thanos for metrics and observability. Now, another great thing happening upstream is all the work on Cluster API. This is a project out of a working group in the Kubernetes community that's designed to fill the gaps in tools like kubeadm. kubeadm is a CLI for bootstrapping Kubernetes clusters, and it turned out not to be as declarative as you need for infrastructure-as-code workflows. So if you want to describe an entire cluster and stamp it out 20, 30, 100 times, you weren't really able to do that with kubeadm alone. Cluster API wraps around kubeadm and provides a better tool for that. So right now in OpenShift, for multi-cluster management, a lot of that upstream work is being baked right into OpenShift. First, there's cluster creation in ACM — Advanced Cluster Management.
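To make that declarative style concrete, a Cluster API cluster is just a set of objects you can template and stamp out N times. A minimal sketch — the names, CIDR, and the AWS infrastructure provider here are placeholders, not anything from the talk:

```yaml
# Sketch of a Cluster API cluster definition: apply it repeatedly with
# different names to stamp out many identical clusters.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster-01
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    # Cluster API wraps kubeadm; this object describes the control plane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-cluster-01-control-plane
  infrastructureRef:
    # One example provider -- could be AWS, Azure, vSphere, etc.
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: demo-cluster-01
```

Because the whole cluster is described as data, it fits the same GitOps workflows you'd use for any other Kubernetes object.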
Today you can boot new clusters that automatically inherit all the role-based access control, governance, and security policies you have across your entire fleet. If a new cluster shows up, ACM automatically applies all of that policy. You can also manage the full lifecycle of all your OpenShift clusters: software upgrades, scaling out the cluster, things like that. Another really cool feature is cluster pools. Just like we have pools of worker machines, you can have pools of clusters that you can claim. So if you're doing some quality testing and you need a quick cluster, you can claim it, use it, tear it down and destroy it, all without having to wait for it to be installed. That's a huge time saver, and it's really cool. On the monitoring front, when you have applications deployed across your clusters, it's really important to have the big picture: what's going on, how many resources am I using, have I claimed too many resources, am I reserving capacity that's not actually in use? We have built-in dashboards for that. Then there's the networking arena, and this is really key. Say you have a front end that needs to talk to a database on a different cluster, or you've scaled a database out across two clusters and the pieces need to talk to each other. Extending that network link is also key. Advanced Cluster Management can help you expand the pod network across an encrypted link, and another really cool thing is that all your pod IPs continue to work. So it looks just like Kubernetes does on a single cluster, but it's on multiple clusters — and it can be a ton of clusters. The CNCF Submariner project is the backing technology here, so go check that out too if you want to dive further in. All right, a quick roadmap for multi-cluster — some key items. There's the new cluster switcher you can move around in the UI.
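Under the hood, cluster pools are driven by Hive custom resources. A rough sketch of the two halves — keeping clusters warm, then claiming one — with the pool name, domain, and region made up for the example:

```yaml
# Keep a few clusters provisioned and ready to hand out (Hive CRD
# surfaced through ACM).
apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws-test-pool
  namespace: pools
spec:
  size: 3                      # number of warm, unclaimed clusters
  baseDomain: example.com
  imageSetRef:
    name: openshift-4.9        # which OpenShift release to install
  platform:
    aws:
      region: us-east-1
      credentialsSecretRef:
        name: aws-creds
---
# Claiming detaches a running cluster from the pool immediately --
# no waiting for an install.
apiVersion: hive.openshift.io/v1
kind: ClusterClaim
metadata:
  name: qa-run-42
  namespace: pools
spec:
  clusterPoolName: aws-test-pool
```

When a claim is deleted, the pool provisions a replacement in the background, which is what makes the claim-use-destroy loop so fast.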
You can move between seeing all your clusters and then dive into a single cluster if you need to debug or change configuration. Some other things are coming in the ACM world as it continues to mature. Shared SSO is something that can be very easily configured once you have access to all your fleet management tools, so we're looking into that, as well as built-in default policies for governance and risk — things like CIS compliance can be built into ACM — and the Submariner multi-cluster networking we talked about. There are test workflows for that right now, so if you want to dive in, go look at that; we're working on getting it to GA, and if you have feedback, please let us know. All right, that's a really quick overview of the multi-cluster world. And of course, if you're online, ask Rob questions — he would love that. All right, now let's dive into security. Pod security. Pod Security Policy has been deprecated and is being removed in Kubernetes 1.25 — or at least that's the target for removal. OpenShift still supports security context constraints; just to make sure everybody's clear, even though we're talking about upstream, that's not going away. But if you are a partner, or your clusters depend on Pod Security Policy, start looking into the replacement, Pod Security admission. That replacement is much, much simpler, so if you're used to more complex policies, you'll want to look into Open Policy Agent (OPA) or Kyverno. OPA right now is supported in ACM through a plugin. Something else going on upstream, at the intersection of Kubernetes and Linux, is user namespaces. User namespaces work with SELinux to protect the container's context on the operating system — on the node. So this is an intersection of the two.
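For reference, the Pod Security admission replacement is driven by namespace labels rather than cluster-scoped policy objects, which is why it's so much simpler than PSP. A minimal sketch (the namespace name is made up):

```yaml
# Pod Security admission: opt a namespace into the "restricted" profile.
# "enforce" rejects violating pods; "warn" and "audit" are softer modes
# you can run first to see what would break.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Anything beyond these three fixed profiles (privileged, baseline, restricted) is exactly where OPA or Kyverno come in.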
It's actually a CRI feature — a container runtime interface feature — that we already use in OpenShift. But the Kubernetes community is going to plumb that all the way down into Kubernetes, and when that happens, OpenShift will start using it automatically. What user namespaces give you is user ID mapping: if you have root inside your container, it doesn't mean you have root outside, on the host. Upstream, this enhancement proposal is still in progress, and we hope to use it as soon as it lands. All right, so we talked about Advanced Cluster Management; now let's talk about Advanced Cluster Security. I'm sure you've seen all the announcements on ACS. Advanced Cluster Management and Advanced Cluster Security together with OpenShift make up an offering called OpenShift Platform Plus — you've probably already heard a lot about that. OpenShift here in the middle comes with an immutable operating system, with all the default security policies and automated operations. But now you have a number of different personas, so we need additional tools to help all of them. Starting on the left — depends on what screen I'm looking at — starting with Advanced Cluster Security. You have your security and operations folks, your SecOps if you will. Those folks have a very focused role: they're looking at threats happening right now in the cluster — real-time security incidents — as well as automated things like scanning for compliance and vulnerabilities and auditing your network policies. They can do all of that through Advanced Cluster Security, and then apply your standard set of policies to any number of OpenShift clusters across the entire fleet. All right, now where Advanced Cluster Management comes into play is with some of your other personas.
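The upstream enhancement has been converging on a simple per-pod switch; roughly the shape below. To be clear, this is the proposal's field, not something OpenShift exposes today, and the image is a placeholder:

```yaml
# Sketch of the user-namespaces enhancement: with the pod opted out of
# the host user namespace, UID 0 inside the container maps to an
# unprivileged UID range on the node, so "root in the container" is
# not root on the host.
apiVersion: v1
kind: Pod
metadata:
  name: isolated
spec:
  hostUsers: false          # proposed field from the upstream enhancement
  containers:
  - name: app
    image: registry.example.com/app:latest
```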
You have developers deploying applications, and they want to store their applications as code; they can deploy all of that through Advanced Cluster Management so that you know exactly what's happening in production at any moment in time. Now, your DevOps team might be changing some cluster configuration at the same time, and for security they want that managed as code too. Changing some of the default config on the cluster can go through that same pipeline, all orchestrated through Advanced Cluster Management. And then your security folks may want to affect some of those same cluster configs, and they can do that through Advanced Cluster Management as well — and also deploy out our Quay registry, so containers get scanned at build time. One of the cool things is you can scan your containers at build through ACM and then scan them again at run time through ACS. Okay, let's look at what's on our security roadmap. I've mentioned compliance a few times. We have a compliance operator that can run CIS benchmarks and standards like that. We'd like to build that into the OpenShift console; right now it's available in the ACS UI, so you can already look at it there, but we'd like to expand it even further. All right, and if you've been tracking Jetstack's cert-manager project — it's a really popular tool for generating certificates for web applications — we are productizing that and bringing it into OpenShift, and we're doing it in a unique way. It's going to be automated certificate management for all the users in your cluster, but it's going to do something kind of interesting, where you can issue certs from a number of different places. You can issue certs from the internal cluster CA, which is what OpenShift uses right now, or you can do it from your HashiCorp Vault, or from Let's Encrypt — Let's Encrypt being more for web applications.
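A sketch of what that looks like with cert-manager's existing APIs — the Vault address, signing path, and app names below are placeholders:

```yaml
# An Issuer backed by HashiCorp Vault's PKI engine; the internal cluster
# CA or an ACME (Let's Encrypt) issuer would slot in the same way.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: vault-issuer
  namespace: my-app
spec:
  vault:
    server: https://vault.example.com:8200
    path: pki/sign/my-app          # Vault PKI signing role
    auth:
      tokenSecretRef:
        name: vault-token
        key: token
---
# A Certificate request; cert-manager issues the cert into a Secret
# and keeps it renewed automatically.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-tls
  namespace: my-app
spec:
  secretName: my-app-tls
  dnsNames:
  - my-app.example.com
  issuerRef:
    name: vault-issuer
```

Swapping where certs come from is then just a matter of pointing `issuerRef` at a different Issuer.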
That's one of the traditional use cases for cert management. Now, from those issuers you can issue certs to a number of different places, and probably the main one is going to be developer applications running on your cluster. But they can also be used for operators installed on your cluster — if you have operators doing webhooks and you need certificates for them, you can roll those through this project as well, along with other Red Hat products like middleware. All right. OpenShift sandboxed containers. This is available right now in tech preview in 4.9, and we're working towards GA-ing it — if you have feedback on sandboxed containers, again, reach out. This lets you run your third-party or untrusted code, and it's designed for applications that are cloud-native: they're already in containers and they need a little bit of extra isolation, so you wrap all of that in an extra kernel. It's just a bit safer. And we hope to get that FIPS certified by the second half of next year — FIPS, I'm sure we all know, is a US government security standard. All right, last: user namespaces. We talked about this already, but once it lands upstream, we want to bring it out of the box into OpenShift, for all the applications running in OpenShift. This is especially helpful for OpenShift Builds, and if you want to use the Quay registry: builds are, by design and just by nature, third-party untrusted code, and we want to protect them as much as possible. So, further focus on security. All right, that is what we have for security. Let's talk about automation. We talked about platform-level and management-level automation earlier. What's also being driven across OpenShift is workload and development automation, and standardization of how your applications are delivered automatically through your workflows. Now, there's a lot of innovation happening upstream.
It's not on this slide, but what I really want to mention is the GitOps working group, where Christian is right now — ah, there he is, thanks. The GitOps working group is a very active community, and it exemplifies what upstream is: multiple companies coming together to focus on GitOps. It runs under the CNCF Application Delivery Technical Advisory Group, so if you want to get involved, go join that. Argo CD is one of the most popular projects in this area, and it's the upstream for OpenShift GitOps. In Argo right now there's continual support for Helm, Kustomize, and other really popular tools, and those features are being consolidated into the user interface. Previously — and still now — there are all these different interfaces you have to switch between, and they're being consolidated. For example, since Kustomize 4.2 has been pulled in, you can now specify that Helm should include CRDs when inflating a chart, so that's cool. Argo CD has also moved to project scope for repositories and clusters, which makes it easier for developers to keep working without having to reach out to a cluster admin or touch global config. And another key enhancement — I didn't put it on here — is ApplicationSets, part of Argo CD. I know Christian is very excited about ApplicationSets. With ApplicationSets, you can create, modify, and manage multiple applications through templated automation, where previously you could only do that for a single repo or namespace. All right, Tekton. Tekton is the upstream for OpenShift Pipelines. It's continuing to gain maturity with pipelines as code: teams can configure their builds, tests, and deployments as code that's trackable and stored in a central repo. And there's a continued focus on DevSecOps — security being the running theme.
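As a sketch of what an ApplicationSet looks like, a list generator can stamp one Argo CD Application out per target cluster — the repo URL and cluster addresses here are placeholders:

```yaml
# One templated definition fanning out to many clusters, instead of
# hand-writing an Application per cluster.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook
spec:
  generators:
  - list:
      elements:
      - cluster: dev
        url: https://kubernetes.default.svc
      - cluster: prod
        url: https://prod.example.com:6443
  template:
    metadata:
      name: '{{cluster}}-guestbook'    # rendered once per element above
    spec:
      project: default
      source:
        repoURL: https://github.com/example/repo.git
        targetRevision: HEAD
        path: guestbook
      destination:
        server: '{{url}}'
        namespace: guestbook
```

Other generators (cluster, Git directory, and so on) can drive the same template from ACM-managed cluster inventories or repo layouts.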
So there's support for rootless images and an experimental hermetic execution mode, which removes networking so builds are more isolated and more secure. All right, one last note on Tekton: there's also a lot of work on advanced error handling, making it easier to debug your pipelines when something goes wrong — again, maturity for the project. KEDA. KEDA is really interesting: it stands for Kubernetes Event-Driven Autoscaling, and it's event-aware autoscaling. Currently, autoscaling with your HPA is focused on CPU and memory. KEDA, however, has this concept of scalers, which means you now have different triggers you can set up — such as a SQL query, so that's cool, or a stream, or how many messages you have in a queue. You have these different triggers you can set up to scale your application. It also exposes CloudEvents. CloudEvents is a specification out of the Serverless Working Group — another CNCF working group — defining a common format for event data, and that's really helpful in creating all those scalers. So there's a lot of work being done there. KEDA is being productized into OpenShift — that won't land for a little bit, but we're doing a lot of work upstream on that as well. All right, Knative — and I'm going to make sure we have time for Christian's awesome demo. Knative is the upstream for OpenShift Serverless, and the OpenShift Serverless team really drives a lot of work upstream. There's so much going on. Knative Functions: the OpenShift Serverless team donated all the work being done on functions to Knative, into the Knative sandbox. They're also driving the creation of a functions working group, so if you're interested in serverless functions, go get involved upstream.
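A KEDA scaler is declared alongside the workload it scales. A sketch with a Kafka queue-depth trigger — the deployment, broker address, and topic names are made up for the example:

```yaml
# KEDA ScaledObject: scale the "consumer" Deployment on consumer-group
# lag instead of CPU/memory. KEDA drives the HPA under the covers and
# can also scale to zero when the queue is empty.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    name: consumer             # the Deployment to scale
  minReplicaCount: 0
  maxReplicaCount: 20
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka.messaging.svc:9092
      consumerGroup: order-processors
      topic: orders
      lagThreshold: "50"       # target messages of lag per replica
```

A SQL-query or cloud-queue trigger follows the same pattern — only the `type` and its `metadata` change.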
They're also putting a lot of effort into a function repository directory with all kinds of runtimes and templates. Then, let's see, Apache Kafka: the eventing team has done a lot of integration work with Apache Kafka upstream, so there's a lot going on there too. And for serving, I talked earlier about end-to-end encryption — there's a lot of work upstream on encrypting all hops through your cluster — and there are a lot of cold-start improvements too. All right, so what's happening right now? Again, I want to talk about workload automation as well as cluster automation. On workload automation: all that work being done upstream for GitOps and pipelines is coming downstream, along with integration work with Advanced Cluster Management and a lot of off-cluster automation — more integration with Ansible. I also talked about scaling: dynamic scaling uses KEDA as the backing technology, and under the covers KEDA uses the horizontal pod autoscaler. It puts a wrapper around it — by default the pod autoscaler uses CPU and memory utilization to autoscale, and KEDA expands on that. Cluster automation: we talked about multi-cluster management, and I just want to highlight all the different areas where automation is being driven across the platform. Automation through Advanced Cluster Management lets you manage your entire fleet through ACM, and right now — I definitely want to highlight this — you can manage up to 1,000 clusters from a single hub. That's just amazing; that's a lot of clusters. I started this by saying that we see a lot of customers using 10, 20, 100 clusters — it goes even up to 1,000, and they're testing further than that. And I also want to highlight that, by design, OpenShift 4 is built to automate all your operations, and operators themselves are designed to automate your workloads, your day-two operations. Sorry. All right, the automation roadmap, and then we can get to Christian's amazing demo.
No pressure, though. All right, some highlights on the automation roadmap. Again, we're continuing to build on all the work being done upstream in Tekton and GitOps — Argo CD and Tekton. We're integrating further with Tekton Hub, so it's easier to pull workflows in from Tekton Hub, and there are a lot more pipelines-as-code use cases. Something really exciting will be the GA of OpenShift Builds v2, and buildpacks for pipelines — so moving from OpenShift Builds to v2 — and of course sandboxed containers in pipelines. There's so much happening and so much that's going to land. Exciting times. Serverless: again, the KEDA integration — I talked about KEDA before, and that will probably be more in the 4.11-and-beyond timeframe to bring into OpenShift Serverless, but it's on the horizon, and Knative and KEDA complement each other, so that'll be great to bring in. Advanced Cluster Management: I mentioned a thousand clusters; they're going to be doing even more testing and improving the performance and scale of managing across the entire fleet. There's so much more I could say, but I want to make sure we have time for your demo. All right, so we talked about the multi-cluster layer and each of the other layers — Christian, if you want to come show us what you have prepared. Let's go. Is it — oh, it's on. There we go. You want me to switch it on? Yeah, is it yours? I don't know. Okay. Can you hear me? Good. Thumbs up. Oh, look at that — you guys have a monitor here. This is really cool, by the way; this is the first time I've seen this at any conference. Let me quickly set up here, because I actually want to mirror my desktop — hopefully this doesn't cause a kernel panic. All right, does it look good? Should I make it a little bigger? Maybe just a skosh. Okay, there we go. So actually, by the way, it's great to see everyone here.
It's a little awkward to be back at conferences after so long away, but seeing everyone in 3D is actually really cool. It's a little weird, and I guess we'll get over the awkwardness as the week goes on. Actually, a quick shout-out to Grish — someone I've worked with for a long time. This demo is halfway inspired by some of the talks he's had with his customers, where we talk about how OpenShift, ACS, and ACM are all better together, right? So I'm going to go through a workflow that shows how you can integrate ACS and all its functionality into OpenShift, into ACM, and into pipelines, to see how you can have that security integrated all the way in. So here I have ACM — Advanced Cluster Management; I'm supposed to say Red Hat Advanced Cluster Management, sorry. And here I have a list of my clusters. If you've worked with ACM before, you'll recognize this: there's my local cluster — I keep forgetting I have the monitor, I'm so used to looking this way. As you can see, this is the local hub cluster. There's a little test cluster here, and this is what I think is really cool about ACM: that test cluster is in my home lab, behind a firewall, and ACM is still managing it. For those who have things like disconnected or air-gapped clusters, ACM can still work in a model where you keep that secured. This is literally a server sitting under my desk. But I'm also managing this cluster called cluster2, and from here ACM gives me certain information about it: what version it's on, that there's an upgrade available, that there are nine nodes — this cluster is actually pretty big, and you'll see why in a second. Looks like there's an issue being flagged here that wasn't there yesterday, so I won't click on that. And then you can actually go to the cluster itself from here.
And this is OpenShift — one of the managed clusters. There we go. Part of this installation is that I'm actually running ACS. Just like anything with OpenShift, the entry point is OperatorHub: you go to OperatorHub, and it's there. So this is Advanced Cluster Security — I've already pre-installed it and set it up here. I want to go through just a little bit about ACS in general. This is the dashboard at first glance. You can see that I have some system violations: zero critical, 165 high. I have all of these at a glance; I can see what's going on, and I can see my top riskiest deployments. You have to keep in mind with security, especially with ACS, that this is all relative. "Risky" just means relative to what it finds — since I have zero critical, risky doesn't necessarily mean critical; these are just the top offenders of what you have here. And it shows you the list of those deployments. This is my favorite page of ACS, by the way. When I first started working with ACS, I thought: I wish I'd had this back in OpenShift 3 and OpenShift 2. Seriously. Here you can see the top violation is that someone accessed a secret — and this secret just happens to hold the kubeadmin password, so it raised that as a violation. Right now it's set up just to note it, but you can set it to block, or to fire off an alert to PagerDuty: hey, someone accessed this secret, and accessed it multiple times. It's not a big deal here, because it was me accessing the secret, but this is really cool — probably one of my favorite things is having a way to be notified when someone accesses something in my cluster. Now, some of the vulnerability management. Yeah, there we go.
So this gives you information on the top riskiest deployments. Here there's an application called pricelist that Jason — I don't know if he's around — helped me build a long, long time ago, and he made fun of me because I misspelled it "purselist" when I first built it. Anyway, this tells me the image that got scanned, the top riskiest components, which CVEs are involved, and whether each one is fixable or not. So basically you have information that tells you: hey, you need to rebuild this image, you need to make sure this image is updated. You can go back to your developers and say, we found these vulnerabilities, we're not going to deploy this. It could be as simple as this case — there are a bunch of RPMs, so it's just a dnf update on the image. This is my top riskiest one because I think I built it a year or two ago — not sure. One last thing is the network graph, which I also really, really like. You can see the network flows — it shows what is connecting to what; here, there's an ingress network flow. You can list network policies, and you can actually simulate a network policy, so you can see what's going to block what without actually applying it — and you can do that whole workflow here. So at a glance you can see how everything is connected to everything else, and what is allowing access to what. Let me bring back that page. This one right here says these flows are anomalous. Why? Because it's wide open to everyone. So StackRox — sorry, ACS — right away tells you that this traffic is wide open to everyone and you may want to take a look at it, and then you can simulate a fix in the network policy simulator. So this is all great — a great tool for an admin, right?
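The kind of policy you'd simulate to close off a wide-open flow is a standard Kubernetes NetworkPolicy. A sketch — the namespace and labels are made up to match the demo's shape, not taken from it:

```yaml
# Restrict ingress to the pricelist pods so only the frontend can reach
# them; everything else that was "wide open" gets blocked.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: pricelist
spec:
  podSelector:
    matchLabels:
      app: pricelist           # pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend        # the only permitted client
```

Simulating it first in the ACS network policy simulator shows which existing flows would be cut before anything is enforced.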
A great, great tool for security practitioners, or for anyone with containerized workloads coming in. But really, what I want to show is how you can integrate it into your pipelines. Karina was talking about Argo and Tekton — that's kind of the world I'm in right now. Here I have an application that's deployed with Argo CD. Oh, by the way, if I go back to ACS, it actually sees that it's an Argo application, so there's that integration there. But back to the demo app. There's a demo application here that's built using Pipelines and deployed using Argo CD. If I go to — here we go — the pipelines here. I have a pipeline that built that application and deployed it. As part of the build process, I've integrated with ACS so that it runs the security checks while it's building, and it'll either block the build or allow it to continue all the way through to deployment. As you see here, when I built it initially, it went through. Since then I've added a few more security checks to the ACS integration — because we want to see it fail, right? So let's start this build process. It kicks off and takes a little bit, because, if you've worked with Pipelines, it first goes and grabs a persistent volume as a workspace, so it can do the git clone right there. So there's a git clone that happens, then a deployment check — meaning the security scan using ACS — and then the deployment happens afterwards. Let me put the logs up here. This takes a little bit... there it goes. It does the git clone, and — as you can see here, let me expand this, make it a little bigger; we do all this with UIs, and in the end we just like terminals, right? — it went by really quick. Looks like I have a bunch of violations.
On the initial run there were no violations for my namespace, I guess, because that's all that was sent in the initial run. And it looks like I have some CVEs that I need to fix — the version of curl, the version of BusyBox, and various other violations — so it sets the overall status to fail. Once it fails — here, I go back to the details page, then back to the pipeline runs — it didn't actually finish, and it didn't actually deploy. It stopped short of deploying the application; as you see here, it's still on that same version. And this is how you can use ACS within your pipelines to stop this at the source. Literally, right after the git clone it does a scan — it can do an image scan, and it can do policy scans on your YAML files. So if you're doing a GitOps workflow and you're storing your YAML in Git, it can actually scan those and make sure they're in compliance — compliance meaning whatever rule sets you define for your environment. I think I'm almost at time — yeah, good timing. So that's it from a high level. This shows how you can not only use ACM to deploy multiple clusters, but also use ACS to manage the policies on those clusters, and then integrate your pipelines from a developer standpoint, from a developer workflow. Developers can be notified early and often when a violation exists, or when they need to update, instead of finding out further down the line. You don't want that late in your process, where you're delaying the deployment of your application because of a security violation and they have to rebuild and redo the whole process. You scan early and often, get alerted early and often, and have it fully integrated into one platform. So yeah, with that, thank you very much. I'm not sure who's up next — Stu? He's coming up here, yep. Thank you. Thanks, Christian.
So while Clayton's coming up, if anybody has a quick question or two for Christian, we can take it — and for those of you on Hopin, we totally will too. I'm going to run down here — there's a question. "A question on the ACS scanning: where do you define any scoring system, so that if you wanted to, you could pass the scan and move on past the failures?" Yeah — so the question is: if you want it to scan, but you want the pipeline to pass even though the scan flagged things — depending, I assume, on the environment you're in; you may be in development and want to go forward. You actually define that on the pipeline side, funnily enough. The ACS integration gives you a status — standard UNIX exit status, zero or one: zero meaning it passed, one meaning it failed. So you can key off that status; it's a simple if statement — if it failed, keep going anyway. You do that on the pipeline side, on the integration side. So there's no scoring per se — actually, yes, there is. The actual output is in JSON format, so you can take a look at the JSON, look for whatever score or specific thing you're after, and act on that. So yes, there's information coming through about the scan via the JSON. I've got a question from Keith over here on the left side. "For ACS, what's it using under the hood when you show the network flows? Is it doing something like Kiali?" That I'm not familiar with — I know it's not Kiali; I just don't know what it is. Thank you, Christian. If you wouldn't mind going on Hopin for a couple of minutes in case there are questions on there — but thank you everyone for the questions here, and we've got to get to Clayton.
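The exit-status gating described in that answer might look roughly like this as a Tekton task step. The container image, endpoint variable, and file path are assumptions for the sketch, and `roxctl` flags can differ between versions:

```yaml
# Sketch of a pipeline gate using the ACS CLI: roxctl exits non-zero
# when a violated policy is configured to fail builds, which fails this
# task and stops the pipeline before the deploy task runs.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: acs-deployment-check
spec:
  workspaces:
  - name: source               # the cloned repo from the git-clone task
  steps:
  - name: roxctl-check
    image: registry.example.com/roxctl:latest   # placeholder image
    script: |
      #!/bin/sh
      set -e   # non-zero exit here fails the task, and so the pipeline
      roxctl deployment check \
        --endpoint "$ROX_CENTRAL_ENDPOINT" \
        --file "$(workspaces.source.path)/k8s/deployment.yaml" \
        --output json
```

To "scan but keep going" in a dev environment, you'd drop the `set -e` and inspect the JSON output yourself instead of trusting the exit code.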
So Christian's going to be here, and we can definitely follow up, but I'm going to hand it over to Clayton Coleman. Thank you. Good morning, everyone. Thank you for being brave enough, courageous enough, and hopefully feeling safe enough to show up in person. This is my first conference since the world changed, so I'm actually pretty excited, although I suspect it's going to take us a while to feel like this is totally normal. I appreciate you showing up today, and hopefully what I'll talk about is something you couldn't have found any other way. Being at OpenShift Commons, being at KubeCon — this is an important time for us, because we're starting to think about what comes next. As an architect at Red Hat, I've been involved in OpenShift since the very beginning, and I've been with Kubernetes since the beginning of that project. We've learned and achieved a huge amount. Karina was walking through all the things that have come into OpenShift — 20 releases across OpenShift 3 and 4, adding in 21 releases of Kubernetes, I can't count anymore — over 20 releases, probably, of iteration in the Kubernetes community. It's been customer success, community success, ecosystem success. But part of my job is to think about the things that aren't working. What are the gaps? Where are the places we can do better? So the talk is Kubernetes as the control plane for the hybrid cloud. If you attended KubeCon virtually in May, Joe Fernandes and I gave a little bit of a teaser for this, and I also gave a keynote where I talked about some of the ideas that lead into it. Now that we've had an extra three or four months to work through those, I want to talk a little bit about the broader context and try to tie it together for you. Even though this is still very early for us, thinking about what comes next is an important part of my job, and I'd like to share it with you.
So there's a bunch of problems that we all face, and there's problems that we can fix by adding something. That's the additive approach: adding a feature, or adding a new component to OpenShift, or adding a new technology that accelerates a particular part of our workflows. But there's certain problems that you cannot fix just by adding one more thing. Well, actually, no, that's not true, because what you can do is build a layer on top, and so the layer on top is what I'm gonna talk about today. But I wanna describe first what I mean by layers. So over the last 25 years at Red Hat, we've been involved in all of the technologies and areas that you see up on this list. This is roughly chronological, in that each of these was an abstraction over something that was previously a problem, or simplified a problem in a way that allowed us to reach another level. There's a lot of commonalities between these. Open source drove every single one of these, or is a fundamental part of how we reached it. Most of these are key parts of our lives today. Obviously, if you're here at KubeCon, I hope Kubernetes is a key part of your life. But the common factor that I wanted to reinforce is that all of these, to some degree or another, have a purpose, and the purpose is helping us run, build, and sustain applications. And applications is a really generic way of saying the stuff that we do that either drives business value, or creates value for others but doesn't necessarily drive any business value for ourselves. It might be the things that we have to run for regulatory requirements. It might be the things that we choose to run in our personal labs or in our homes. And this is a little bit of a Red Hat-centric view, and I chose it specifically because at each of these layers, the open source communities either built on, around, or were integral to the evolution of these.
And open source is a fundamental part of every technological advance for the last 25 years. And so at KubeCon, which is about celebrating all of us working on our problems together, the problem that we're roughly trying to solve is to get better at building and sustaining applications. So when we talk about another layer, we're kind of asking, well, what could come next? What might come next? Have we been successful with Kubernetes? So if you're not familiar with it, this is the CNCF landscape chart. I had to shrink it to make it show up on one page. We all make plenty of jokes, we get laughs about how complex this diagram is. But the way I prefer to look at this is, these are solutions to real problems. We may not all agree on what the actual fix should be. We might say, oh, no, no, no, that technology is not for me, I'm much more interested in this other technology. But some of us, all of us, have contributed to the breadth of this ecosystem. And every one of these technologies or tools or certified platforms or services is something that helps us build and run applications. And all of these are possible because of the layers below us. We're able to deal with more. We're able to solve more problems. A great example, which you can just barely see because, again, this is almost too big to fit on even a high resolution laptop: observability. Observability has been a huge open source transformation in the last five to 10 years. There were capabilities before that made observability easy, if you were willing to pay lots of money and you were using a particular infrastructure; observability wasn't a new concept. But the idea of observing everything, of getting deep in the details and all the way up the stack, across tens of thousands of applications or services, across multiple different sets of hardware, that's something that we're still coming to terms with.
That problem is a consequence of how successful we have been at building and running more. And so ultimately, every time we go up a level of abstraction, we're adding more. We're adding more things to deal with. We have more technologies we need to understand or integrate. And so I like to think that a part of my job, and I suspect a lot of folks in this room's job, the folks at KubeCon's job, is to come up with an answer to help deal with this problem. For the last 25 years, our answer has been: we wanna do more, we're building more, we have to handle more, we have to manage more. And "yes please, more of everything" is, I think, one of the most succinct definitions that I've heard for the phrase hybrid cloud. Hybrid cloud is the reality, because the reality is there's too much of everything at too many layers for any of us to really understand, or to control, or to even predict what we're going to need next. And so another way of describing hybrid cloud is that hybrid cloud is the problem: there's too much of everything. How do we bring it together in a way that makes sense? What's the simplifying assumption? What's the simplifying abstraction that takes everything on that CNCF landscape chart and boils it down to something that we can all appreciate and address, without worrying too much about all of the details all at once? So if that's the problem, how does it connect to our day to day? So I talk to customers and community members, to partners, to technologists, to individuals. I listen to what the product organization hears from customers. I listen to what customers say about the product organization's priorities. And I look at the places where what we're trying to do is move above Kubernetes. We're all above Kubernetes. Karina, in that very first section, talked about the breadth of the things that exist above Kubernetes.
And that basically boils down, I think, to three questions I hear the most, that all of us end up answering on top of the latest layer on the stack. First, how do I go beyond the limits of one? Whether that's one cluster, which, as Karina said, we all have done; almost everyone that I know of who's deploying Kubernetes at scale is running more than one. Sometimes, not everybody, but sometimes, people say, well, how do I go beyond one cloud? Not necessarily because they planned to, but because they acquired someone, or they got a good deal, or someone made a strategic directional change, or someone got a better cost opportunity. Or it turns out that half of the stuff that you didn't even know about was actually already running on one of those other clouds. It might be questions like, how do I go beyond the limits of a region? Kubernetes, fundamentally, is a technology that's designed to solve a simple problem: a closely co-located set of computers running software in a manner that hides the details of those individual machines. But that doesn't give you a solution for what happens when you want to run an application across seven geographies at once. What happens when your workload follows the sun? What happens when your workload has to deal with a large failure? And so these are questions that we all answer, and we're all coming up with approaches. The second question: how do you integrate more? Obviously the cloud native landscape was about new technologies, but there's also services. We think about one of the biggest transformations in the last 10 years as well, which has been delegating to others to do things on our behalf through an API. Infrastructure as code, infrastructure as APIs: asking someone to take on a portion of the problem, to give them that responsibility so that you don't have to worry about it, so you can accomplish more.
And so there are more services than ever before that we integrate, where someone's taken on the responsibility for keeping that running so it's not our problem. We do this internally, we do this within organizations. Vendors are increasingly moving from packaged software to managed services. This transition will only accelerate because, again, another way to deal with complexity at scale is to put the problem in a box and talk about it from the outside. And finally, we have more teams. Every one of those applications is run by somebody; those teams turn over, those teams have to pass on knowledge, they have to be educated. All of us, at some level, are trying to deal with the complexity that we've created as we're creating additional business value. So the solutions for these problems are out there. OpenShift, as part of our ecosystem, focuses on things that we can stand behind, but we can't stand behind everything. There's a huge ecosystem of partners and vendors here at KubeCon today who track and solve problems. Open source communities are even larger than that: people scratching an itch, taking a problem that they know how to solve and solving it in a way that others can consume. In fact, nothing is wrong with the set of patterns that we use, with the approach that we've taken, which is that we all solve our individual problems. And at some point, all of us end up solving very similar problems. We talk about CI/CD, continuous integration and continuous deployment. We talk about doing that across clusters. We might bring abstractions in like service mesh, which has been one of the hottest topics. There's disagreements about how we do this; different use cases, different requirements lead us in different directions. But at a certain point, and this is I think one of the strengths of open source as a fundamental way of thinking, open source is relentless, because it's all of us trying to solve the problems we face.
It doesn't always mean that someone else has solved that problem already, or that the way that they've solved it applies to my problem. Or maybe I can't approach it, or maybe I don't even know about the fact that someone else solved the same problem as me. And so open source is, I like to think of it as a bit of a relentless way: we all batter, bash our heads into the wall and write some code and share it with the world. We move a little bit forward. We try all the paths, and occasionally some smart person somewhere will get ahead of everybody else and be like, ah, CI/CD, this is brilliant, I'll start a lecture circuit and make lots of money sharing my knowledge with others. Or it might be a private vendor who gets a great idea. But that's just one smart person, or 10 smart people, or 100 smart people. For every one of those people, there were a thousand others who were also smart and also solving that problem. They just don't necessarily have the time to focus on that. So as a group, both here in this room, in the OpenShift community, and at KubeCon, we're always working on common problems and trying to move them forward. And we've been doing this long enough, and the CNCF chart, and I think cloud native in general, kind of speaks to that, that we can start to ask: what are those common problems that all of us are trying to solve, where if we stepped back, looked at them the same way, we'd say, you know, actually, our problems are the same? If we can simplify, if we could agree that maybe we don't get everything we want, or if we can pick the 90% of things that we all agree on, could we arrange those together into something that streamlines, unifies, aligns? And the answer may be no; maybe we have to wait for another smart person to come along. But as part of my job, I sit all day and I go, what are we all doing that's very, very similar, where maybe, just maybe, if we bring two or three or five or 15 separate problems together, they actually all are one problem?
And if we simplify that problem, we might be able to make and share that simplification, just like every layer of the stack has done before us. So, describing the problem: we talked about layers, problems that can't be solved just by adding more. So imagine an abstraction that's application-centric. Kubernetes is application-centric. Linux is application-centric. What are the things with Kubernetes that allowed us to move forward? Well, one of them was, and this is what I mean when I talk about things in common, I hear this, you know: Kubernetes adds complexity. There has to be a payoff for Kubernetes to be worth it for you to adopt it. What is that payoff? I think there's an obvious payoff, but it's always good to restate it, because it's not always obvious to everyone who's new to the ecosystem, and after we work on Kubernetes for a while, we forget about it. Which is, for the most part, most of us were deploying chunks of Linux applications and some Windows applications into environments, onto machines, and we were trying to roll them out, expose those to external users over a network, connect those up to other applications, perform rollouts, and survive machine failures. We were all doing that differently, and very smart people before Kubernetes had solutions for it, but we were all, for the most part, individually iterating on the approaches across the ecosystem. And that didn't make us wrong; that just made us inefficient, because we were focused on solving problems that somebody else, or the set of us, already kind of had dealt with. If we could share that, if we could put that consistency in place, we wouldn't ever have to solve that problem again, or hopefully we wouldn't have to solve that problem. That let us move up to a higher level, let us focus on what we actually need to do. Well, we want to deliver, you know, hundreds of thousands of cloud native applications. We're bringing new geographically distributed capabilities.
We're supporting global services. We need 24/7 uptime. We need to be able to survive failures of data centers, regions, geographies. Or sometimes we just had tens of thousands of developers who, increasingly, were solving different problems that all needed to come together, and we just had to make sense of all of that. And Kubernetes gave us a way of standardizing a part of the problem. So what are the kinds of standardizations that would be useful? I don't have a complete answer for this. In fact, I'll be up front: we talked about hybrid cloud being the problem, too much of everything, and I'm going to talk about one possible approach that feels right to me, looking at the problems that we're all facing. And it starts incrementally. It has to build off what we know. So I'm going to give three examples, and these are going to be pretty basic examples. You might say, Clayton, that's not obvious at all, go back to the drawing board, you should be smarter. And that's okay. And I'm going to frame each as a constraint, an opportunity or a superpower, and a consequence of that. And I'll talk a little bit about, you know, how we're thinking about this and how you can get involved. So I think the fundamental constraint is, and standing here, this is kind of a no-duh kind of thing: if I stand here at KubeCon and I say, hey, we need to do something new, go rewrite everything, you're all going to laugh at me, right? To justify that, I would have to give you a 10x improvement in some area. Generally the rule of thumb is nobody's going to change anything unless the benefit is so obvious it is staring you in the face, and the bigger the hill you have to climb, the bigger the benefit has to be. So I definitely know that in this room, if I told you you had to throw away Kubernetes, you'd probably be like, oh sure, we don't love Kubernetes, you know, there's some problems with it, we're all totally psyched to go out and rebuild everything we do. That's kind of a non-starter.
So this is a very useful constraint, because it eliminates a huge class of possible approaches. And it's incremental: if I can promise you that you can take most of your applications and get the benefit just by adding one thing, I am adding one thing to the ecosystem. The question is, is the benefit going to be worth it? And does that help simplify the problems that come after? So you're going to laugh at how simplistic this diagram is. I'm trying to boil this down to the very essence because, again, we don't know exactly what the future looks like. But one of the things that I like to think about is, if there's one problem that everybody in Kubernetes agrees on, it's that if you want to keep something truly isolated, and truly isolated means to the utmost level of paranoia, you have to give them their own space. And there's a couple of reasons for that. One, actually, is that Kubernetes is a collaborative environment. It was not designed for hard isolation at every level. It tries to, and OpenShift has been relatively successful at, giving you isolation on some levels. Whether that's inside of containers; Karina mentioned sandboxed containers, things like running containers inside of VMs inside of containers, which gets awfully complex. But short of creating that layer, that separation, some part of Kubernetes is fundamentally cooperative, collaborative. If you don't like what someone's doing on the cluster, you can catch it using ACS or other technologies. And that's a part of the price we pay because we brought the problem together and standardized it. So I'm going to start by saying, what if we could tease apart different teams? But the problem is that if you give every team a cluster, you just end up with too many clusters. That's not what Kubernetes was about. Kubernetes was not about running tens of thousands of clusters. Maybe some people benefit if you run tens of thousands of clusters, but I think we can do better.
So at a very high level, we would all agree that teams like the things that a cluster provides, and they want that isolation. We want that isolation between teams. You know, if team A screws up, you don't want team B to pay the consequences. So thinking about an abstraction, let's imagine, just for the sake of argument, a control plane, an abstract layer that sits between an application team that thinks it's using Kubernetes and an actual Kubernetes cluster. This comes with a whole host of abstraction challenges. Many people in the ecosystem have built or tried things like this. Every time you use GitOps, you're basically working in a mindset that uses this. Argo emulates parts of this pattern. ACM emulates parts of the pattern. We're all building similar tools. Can we take the idea, strip it down to its bare essence, and look at it through a different lens? This is something that gives us a new opportunity. So if you have someone who thinks they're on a cluster, well, obviously they're gonna say, okay, well, I want to kubectl apply my service and my deployment. I want to apply a Helm chart. I want to use GitOps. I want a CI/CD process. I expect somewhere a container runs. So obviously, as part of this constraint, we would have to take a workload that someone specifies at a fairly high level and make sure it runs. But the best outcome would be that the team doesn't actually have to care about what a cluster is. Because I think, and this is something all of us fall victim to, we are technologists. We think, I have a technology, I will solve a problem with it. When we come to KubeCon, we show icons of technologies, but the point isn't the technology. The technology is a tool that gets us to the next step. What's an abstraction that takes away the cluster from Kubernetes? Because the cluster isn't the point; the applications are the point.
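As a thought experiment only, you could picture that in-between layer as something that accepts an ordinary manifest and quietly decides which physical cluster it lands on, then re-places workloads when a cluster dies. None of these names come from an actual project, and the least-loaded scheduling policy is invented purely for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "clusterless" control plane: the team applies a
# manifest and never names a cluster; placement is the layer's problem.

@dataclass
class Cluster:
    name: str
    healthy: bool = True
    workloads: list = field(default_factory=list)

class LogicalControlPlane:
    def __init__(self, clusters):
        self.clusters = clusters
        self.placements = {}  # workload name -> Cluster, hidden from the team

    def apply(self, manifest: dict) -> None:
        """The team's 'kubectl apply': pick the least-loaded healthy cluster."""
        target = min((c for c in self.clusters if c.healthy),
                     key=lambda c: len(c.workloads))
        target.workloads.append(manifest["name"])
        self.placements[manifest["name"]] = target

    def cluster_failed(self, name: str) -> None:
        """On cluster failure, re-place its workloads elsewhere. The team's
        manifests are unchanged; only the hidden mapping moves."""
        failed = next(c for c in self.clusters if c.name == name)
        failed.healthy = False
        for workload in failed.workloads:
            self.apply({"name": workload})
        failed.workloads.clear()

cp = LogicalControlPlane([Cluster("us-east"), Cluster("eu-west")])
cp.apply({"name": "frontend"})
cp.apply({"name": "database"})
cp.cluster_failed("us-east")
print({w: c.name for w, c in cp.placements.items()})
```

The point of the sketch is the last line: after a whole cluster fails, the team's view of its applications is intact, and only the placement mapping, which the team never saw, has changed.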
So when I say I want to run a service and deployment, I don't care what machine that runs on, the vast majority of the time. Why do I care what cluster it runs on? Could we make Kubernetes clusterless, right? How do we separate the high level problem, I expect a service to be running somewhere, from the low level problem, I've got to go maintain and run a cluster? Someone's still going to be doing that, right? At the end of the day, there's still a physical machine running a bit of software. But we're kind of getting to the point where I just don't care about that problem anymore. That's a low level detail that someone can abstract for me. And if we do this right, and again, this is a constraint, we don't necessarily know that we can do this in a perfect fashion. But if I can start with a service and deployment and not care what cluster it runs on, and to me as a user, I don't see the difference, then all my tools still work. My user experiences work. And yeah, there may be some tweaks. Nothing is free. You're going to have to adapt. But if I can keep that idea isolated, we get something we don't have today, which is, when a cluster fails, bad things happen. And again, all of us, to some degree, have to deal with this. We build solutions that let us say which cluster an application runs on. I suspect many people using a GitOps flow or CI/CD flow, you've got a mapping somewhere that says this application goes to this cluster. These aren't new ideas. But we also had those ideas when we were just talking about machines and software moving across them. What is the common pattern that we could all agree on for taking an application and putting it on a cluster? And when it moves, we don't care. How do we make that part of our normal operations? A superpower that Kubernetes gave development teams is the ability to delete a pod or to stop a machine and verify, hey, my app kept working. You could do it before.
But now all of us have a common vocabulary for dealing with a failure of one machine. How would we get the superpower of not caring about a cluster, a geography, a region, a cloud, half of our data center infrastructure at a key vendor? How do we build the abstractions that let us test, tolerate, and try failure on a much larger scale? How do we go through that in a common way, so that when you add a new person to your team, they don't have to learn your way of doing application mobility? They come in with a common understanding: a common problem, a common solution, which probably isn't, absolutely isn't, going to be the perfect solution to the problem, but it's a common one. And that's, again, what open source is about: grinding down all of the problems we have into these nice smooth pebbles that we can all fit together, and moving on to solving our real problems. So in this example, this level of abstraction is really about trying to hide the fact that there's two different clusters underneath. Sometimes you can't hide that, right? At the end of the day, we're still running software somewhere. But if we, as a group, as an ecosystem, as a community, are doing this the same way, that becomes the center of gravity. It becomes the path well traveled. It becomes the downhill path that gets better and better and better. If you can test cluster failure day in, day out, if deployment looks like the same process, if rollout looks like the same process as a cluster failing, think about the options that gives us. And there are real constraints. In a cluster, there may be hardware that's only available on a certain machine. Your app's not running if you don't have access to that hardware. That's the part of your application that's specific, that is not infrastructure agnostic. But for all the other parts, for all the other bits of the application, we don't wanna focus on how our applications are special. Or actually, no, I take that back.
We do wanna focus on how our applications are special, and let all of the other problems become non-problems. So something special is when an application isn't cloud agnostic or cluster agnostic, when where something's running matters. How do we break those into smaller and smaller problems, so that at a high level we say, this is an application, I don't care where it runs? I'd be willing to wager that's about 97, 98% of all of the applications that run on Kubernetes today. The other 2%, which, judging by talking with OpenShift customers, is 90% of OpenShift customers, there's always something special. I need hardware. I'm running network infrastructure. I'm the backbone of an important payment processor and this has to run in a specific regulatory area. Those are the unique characteristics of the app, not what cluster it runs on. A consequence of this abstraction is we actually have to start thinking about how applications connect with each other. We actually do this today. There's at least 15 or 30 or 75 service mesh projects out there that talk about the abstractions that help us link up services together in larger organizations, because that's what we do: we build upon and connect and integrate these different bits of technology. As for the characteristics of a service mesh, every one of us approaches the problem slightly differently because we have slightly different requirements. Service mesh is an organizational construct. It is the ability to scale services and applications across a wide enough group that it all doesn't just collapse into a flaming pile of ashes. The technology helps us, but unless we address, at the same time as the service mesh problem, the organizational problem, policy, control, placement, we're unable to truly solve the whole problem. In fact, a lot of times when I look at the information we gather about how people are using service mesh, the biggest problem isn't the service mesh.
It's how they pick a pattern for how that service mesh is used. This is an area where, if we can consolidate and come together, we can find common approaches that allow us to not just use the technologies for cross-cluster, but figure out the common way that most applications across all footprints should run. And again, we don't know all of the details here. In fact, I'll be quite honest. Is this the future of Kubernetes, a diagram like this, a layer where you can run different types of applications with different sets of APIs, with integrations on the side, that picks a winner or a set of winners, or has a pluggable layer for all of these different technologies, like identity, service interconnectivity, networking, storage, backup, data consolidation, data duplication, the ability to represent the data of your application distinct from where it runs? I don't know. We're just in the beginnings of this process. But I will say, based on what I've heard and based on the people I've talked to, we are absolutely all solving the same problems. What I would like to do, and what Red Hat is very interested in doing, is working with customers, partners, users, this community, to try and find the commonalities. And, you know, Diane is probably jumping up and down because I keep saying the things that we have in common, which is the motto of OpenShift Commons: finding the points of how we work together, where we're ready to say not just, this is a technology that we can use, but, this is a pattern that we all agree on that simplifies, standardizes, and aligns where we're going. And so we're very early in this process. In fact, it's really too early to even pretend like I'm gonna talk about a roadmap. But what we are doing is we're prototyping these ideas in the open. We're talking about them here in the community, as part of our one-on-ones. We're trying to work concretely through the list of features that Karina showed around multi-cluster.
Each of those capabilities is part of a discussion with customers, partners, and users around what problems they are trying to solve. We're trying to work through and offer those incremental capabilities, to give you the options where you might diverge, so that we can use that as input to the next process, which is: how do we converge? And so I've added two email addresses up here. Rob Szumski is a PM who is helping coordinate the early parts of what we talk about here, Kubernetes as the control plane for the hybrid cloud. The project, the prototype that we have, is called KCP, kube-like control plane, although we're very cagey about whether that's what it actually means. But if you are interested in these ideas, if you have use cases, if you think that your problem is one where you'd like to see another layer that feels kube-like, that standardizes those, we wanna gather feedback over the next year. This is gonna be a very active area of investigation for us. So I hope you'll come along for the ride, and I hope the ideas and concepts here are ones you can take and ask questions of us: why aren't you helping us converge? Why aren't you helping us simplify what we're doing? So thank you very much. I hope you all have a great KubeCon. And if you would like to talk about any of these ideas, I will be here all week. Thank you. Clayton, we do have a question here for you. Max? Yeah, hi. How is that different from the Federation work? So, Federation, I was hoping someone would ask this question. Federation was a great idea that was too early. We've been working on Federation since before v1 of Kubernetes. We recognize the problem. I think we have new tools, new opportunities, and lessons learned that will help us evolve. We're not necessarily trying to solve exactly the problems of Federation, but we've seen enough people address Federation differently that we have a reasonable suspicion, and I can go into some details later.
We have a reasonable suspicion that one of the elements we were missing was everybody in the community having the problem that something like Federation could solve. And so there's always the case of being too early because you don't have enough consumers. I would say everyone who has multiple clusters today is looking for things that simplify that story. Federation, and the ideas from Federation, and the ideas that have continued to evolve in the community around Federation, there's many different approaches. I think it's time for us to come back to that. So I think the time for ideas like Federation is now, and a lot of these approaches draw from the experiences that we had early on, and the challenges and the limitations of Kubernetes, of our ecosystem, and of our technology stack, and what people were ready to adopt. And so I think we're better positioned now to approach the problem in a meaningful way. I think this is looking at it through a different lens. There's a lot of things that Federation didn't try to solve that may have been required, such as the ability for different teams to have different sets of APIs, or how to evolve APIs across teams. If you have tens of thousands of teams all using an API and you change that, you've now broken 10,000 people. That's a concept that we had no way to even begin to address in the early days of Federation. I think that's some of where we're going: those higher level APIs, those requirements, those use cases are gonna be more obvious now and will help guide where we want to go. So Federation is an inspiration, but it is not the limit, for sure. All right, thank you so much, Clayton. All right, for our friends watching online, we're actually going to cut you over to Paris to watch a discussion. We've had an online viewing party there. For those of us in the room, we actually do have a break until 11 o'clock. A couple of quick things.
Number one is, if you haven't already, there's a little swag table in the back. Feel free to grab some stuff on the way out, on the way in. We're gonna restart at 11 o'clock. You will not wanna miss DivineOps herself, Sasha. She will be here, and feel free, a bunch of the speakers will be here if you wanna ask some questions and everything, and we'll be back at 11, thank you. And now we're trying to get our folks from Paris to join in, and it's taken a little bit of doing. And I'm seeing in the chat, someone's having a problem with my bitly link for Clayton's slides. So as soon as this chat is over, I will go back and fix that link. And we have with us here, Dewan, you are muted. If you can unmute yourself, and let's see if we can get Yasin in here and give a shout out there. But while we're doing that, I'm just gonna share some slides here just to give, ah, there is Yasin. There is Yasin. There we go. Woo-hoo. Hi guys. Whoa! Woo-hoo! All right. Well, hello everybody from Paris. How's it going over there, Yasin? Yeah, pretty good, pretty good. And thanks for this talk, that was excellent. The community here is very excited. All right. Well, we are thrilled to have all of you here. I can see a bunch of people that I sort of recognize, except the masks are disguising you all. So thank you for taking the risk and being courageous and brave and joining in the meetup. It is totally thrilling to have you here with us today. I hope you're enjoying what we're doing so far, and this is a great example of a really vibrant OpenShift meetup. And for those of you who are watching in Hopin, or are in person in the Expo hall here, there is a booth that Dewan is managing today to get you hooked up to some of the other meetups around the world. So Dewan, tell us a little bit about where our meetups can be found. Absolutely. And again, thanks everyone for joining, whichever drinks you have; I have my Tim Hortons here. So thanks again for joining.
I'm gonna quickly share my screen to tell you a bit more about the meetups we have. So let me know if you can see my screen. I can see it, perfect. Awesome, awesome. So this is our dashboard, where we can see the total number of events and the total number of members. But these are only the OpenShift groups that we manage as part of the parent OpenShift group. If you go to meetup.com/topic/openshift, you can see the true number of members in this community: over 155 groups in different parts of the world. Not all of those are part of the parent meetup group. So the point is, wherever you are, you can probably find an OpenShift meetup group near you, and whether it's regarding OpenShift or an open source project that you love, you are able to contribute as a speaker, as a host, or just to be there and listen. So I'm really looking forward to seeing you at one of those meetups. And if you have any questions, please let me know. My name is Dewan Ahamed. You can also find me at the expo booth. But yeah, just hoping that you can join one of these meetup groups. Yeah, and thanks Dewan. Dewan has been managing all of the OpenShift meetups for the past year, and it has really been a wonderful experience to see people shifting from live in-person events to virtual, to hybrid, and to having you guys join us here today from Paris. We also have some folks in Kuala Lumpur joining us today, and we'll give them a shout out later this afternoon during the lightning talks. But there's a ton of wonderful, great content coming up today: Sasha Rosenbaum, who you might know as DivineOps, and Audrey Resnick, our resident data scientist, are up next, and we'll be having some great talks from end users from Broadcom, from TIA, from MarketAmericaShop.com and the Electronic Training Alliance shortly after the lunch break.
And I know you guys in France will probably be going to sleep around that time, hopefully, or drinking a little bit of wine, I hope. I've got my monster drink in the fridge just for the later part. But all of it's also being streamed live on YouTube, so you can replay it at your leisure when you're awake tomorrow, and all of the content will be up there as well. The other thing: I'm just gonna share my screen for half a second and keep you in, find the Chrome tab with the welcome slides, and give it a second to think about it. I'm gonna encourage all of you out there, whether you're in person or not, to join OpenShift Commons. If you wanna get into our Slack channels, make some connections to your peers, and create some connections to some of the Red Hatters who are here today, please do scan this little QR code and make sure that you join, and we'll get you on board as quickly as possible. A lot of you are already part of member organizations; it's an organization-based membership. So if you haven't already joined, even if you go into the participants list and see your name there, still fill out the join form, because that gets you pushed right into the Slack channel a little bit later. So take a moment for that. I know for some of you in Paris, we didn't manage to ship any books over there, but there'll be a book signing today at the lunch hour for the gathering, and Jason Dobies is up here. Some of you may have seen, there's a Kubernetes boot camp that's gonna start at noon our time, Pacific time. You have to have a special ticket for that, so go back in and re-register if you bought a general admission ticket and you wanna go; there are still a few seats left. And also over the course of KubeCon, we are gonna have a number of wonderful office hours, including one of my favorite things, the OKD (the open source side of OpenShift) working group office hours, which is always fun.
And I hear Christian Glombek's gonna be talking a little bit about what's coming for ARM and OKD, so that's gonna be a hot topic for us, along with CodeReady Containers for OKD with Charro Gruver. There are some more live office hours that are gonna be happening; if you're in person and in the room, Josh Berkus is the person to chat with, because he's been the organizer behind that. Thanks so much, Josh, for taking that on. And if you're in person, there is a Red Hat job social at some place called the Million Dollar Theater, which sounds like fun. You're welcome to join us and have a treat and a coffee, and I'm sure they'll give you some swag, because they're wonderful about that. But really, I just wanna shout out and thank you all for joining us here today. It's a totally amazing thing to be able to do something like this, totally live, with a virtual audience joining in and with watch parties around the world. I'm so grateful to all of our end user speakers who are coming up right after the noon hour for sharing their stories and for helping us make the community as strong as it is worldwide. So again, if you're into the OpenShift meetups, join Dewan in the, in the, oops, there we go. Where was I? And now I'm live again in the back room. There I am. Hey, Paris, now I can see you again. Shout out to Paris for being there and taking it on and joining us. Love seeing you all, even if I can't see you through the masks. Totally grateful for all of you joining us. Join OpenShift Commons. Take a trip over (I think it's this way if I'm pointing, or maybe that way) to the expo hall and join Dewan, and he can get you hooked up if you're somewhere else in the world. I have seen some amazing places: Indonesia, Chile, folks all around the world joining us virtually. Pretty much, if there is not an OpenShift meetup near you, watch out, because Dewan will get you to organize one. And we'll do everything we can to help with that.
And I will post the link, rather than the QR code, in the chat for joining again. So I'll let you all go grab a drink, a glass of wine or a cup of coffee, or if you're like me, a Red Bull or a monster drink. And let's... I quickly wanted to give a shout out to Yasin, Sabi, and all the organizers. You can't imagine how hard it is to pull all this together, so a huge shout out and a huge thanks to Yasin, Sabi, and all the hosts who worked tirelessly to bring this into reality. Yeah. Thanks to you guys. Thank you. Thank you very much. Absolutely. Thanks to each and every one of you. And I would be totally remiss if I didn't thank my in-person host, Stu Miniman, who is hopefully grabbing a cup of coffee, but he knows how near and dear he is to my heart for making the in-person Los Angeles side rock; and Natalie Pazamo, who's doing the event management there; and all of the other folks who have been there, especially all of our partners. And Michael Waite, please, if you're around, hang out for the lightning talks. They're gonna be hilarious, and there'll be some fun raffles for swag from the partners. It's gonna be a long day, but it's gonna be totally worth your time to hang out for that. So a shout out to everybody there in the audience. Thank you, Clayton, for that wonderful talk, and Karina for that wonderful release update; if you missed Karina's release update, rewind on YouTube and you can watch it. And thank you to Christian Hernandez for stepping away from GitOps Day, the other co-located event, and coming over and running that wonderful ACM GitOps-ish demo. I am incredibly grateful for the wonderful team that's backing us all up there in person. And Timothy, a shout out to you and the AV crew for making this happen. So right now I'm gonna let you all go on break, take it back to the room, and we'll make this the wonderfulness of hybrid virtual live events.
It's a first for all of us, and I think this might be the new normal. So hopefully next time it'll be maskless and we'll all be there. But I think we'll always be doing our OpenShift Commons gatherings with a virtual component, because it is a global community and we really wanna help you guys connect to each other. It's all about making those connections between peers, and between Red Hatters and customers and partners across the entire community. So thanks again to everybody in Paris, you rock, and we'll talk to you all soon. Hopefully we'll get to visit you sometime soon. Take care, Yasin. Back to the live streaming. All right guys, take care, Dewan. See you in the expo hall. Yeah, I'll go back to the expo, bye. All right, awesome. Okay, I'm standing between you guys and lunch. My speaker notes didn't come up, so I'm gonna wing this so that we can get done on time and everybody can eat. So good morning, my name is Audrey Resnick.
I'm a senior principal software engineer and data scientist with the Red Hat OpenShift Data Science team, and I'm gonna talk to you about what's the deal with managed services and model delivery. So if you're a data scientist, or if you work with data scientists supporting them, you'll know that when they create a model, there's a lot more to it than just creating the model. You wanna be able to get data to the model, deploy the model, and monitor it. So we're gonna go into some of those items. We'll take a look at a model's role in an intelligent application and get into what managed services are. Sasha thankfully went ahead and covered some of that, so I don't have to go into detail there, thank you. We'll take a look at who uses managed services, and surprisingly it's not just the data scientist when we're talking about intelligent application creation. Then we'll look at how managed services help you along with model delivery and where you find them. And finally, I'm gonna click through a very quick demo of the Red Hat OpenShift Data Science platform and the managed services that are available. So when we take a look at intelligent applications, we have to take a look at the model's role in them. Now, intelligent applications by themselves are not just one small thing; they are a distributed system. There are things that work in conjunction all the way from data verification to serving infrastructure to configuration. And when we take a look at these intelligent applications, we'll see that the model code is just a very, very small part of that. And the model code, or the model itself, has to be able to make its way through this distributed infrastructure. So it has to have a way to interact with things such as feature extraction, and it has to be able to interact with some of the analysis tools. And you look at that and go, wow, that could be really complicated.
That could be kind of hard. So is there a way to easily create a model and be able to use this thing called managed services? Well, there is. At Red Hat, we went ahead and created something called the Red Hat OpenShift Data Science platform, and I'm gonna go into detail on that. But it's just an easier way that we can help the data scientist deliver their model and monitor it. So when we take a look at managed services, we can divide them into four groups or categories, and within those four categories, we actually have a number of personas that are going to interact with them. If we take a look at the first category, we want to gather and prepare the data. That means we're gonna look at data storage, data lakes, data warehousing, and stream processing, and it's really our data engineers that are gonna get totally excited about this category of managed services. Then we go into actually developing the model. When we develop the model, we bring the data scientists in, and they're going to create the model and work with the algorithms they need to solve the particular business problem they're trying to solve. I just wanna mention that you'll notice at the very bottom, we have IT operations keeping an eye on everybody; we'll get back to that. Then we want to be able to deploy the model in an application. We can tie in with CI/CD pipelines, and that's where you're going to have your application developer or machine learning engineer helping out. And then finally, we want to look at model monitoring and management. We want to see if that model that we've deployed has any drift: is it giving us the answers that we thought we were going to get, or do we have to correct it and retrain it? And that's where both the data scientist and the application developer or machine learning engineer will work in conjunction.
Now, having all of these services available for everybody can actually be a nightmare for IT operations, right? You wanna give your users the latest bells and whistles, but at the same time, you want some sort of platform, some sort of services, that you know you can be comfortable with, that you can depend on, and that you know are not going to cause any outages while your users are actually trying to create a model. So let's take a look at the model lifecycle and where these managed services fit in. Now, remember I told you there were four categories. First, we want to extract and transform the data. Instead of building everything ourselves, with the Red Hat OpenShift Data Science team we said, wouldn't it be cool if we could invite a whole bunch of different vendors, open source vendors, in, so that you have a lot of choice? So when you're extracting and transforming the data, yeah, you could use Apache Kafka Streams to pull in some of your data, but wouldn't it also be cool to use somebody like Starburst Galaxy so that you could curate your data? You really wanna unlock the value of your data by making it very fast and easy to access that data across the hybrid cloud. Next, we want to take a look at creating models. We want to be able to use a Jupyter notebook or something like that for some exploration, but maybe at the end of the day, we're really interested in what Anaconda has to offer, because they might have an extensive set of data science packages or libraries that we could use in our Jupyter notebook projects when we're doing some of the experimentation. More on experimentation is coming up next: another one of our independent software vendors, or ISVs, is IBM Watson Studio.
So when you're doing this experimentation, you can use IBM Watson Studio to manage your AI models at scale and see if there are any issues when you're trying to deploy them at scale. Now, when you're done with your model and your testing, done with your experimentation, what you want to do is deploy those models as actual services. You can use an ISV such as Seldon Deploy, and it's going to really help you simplify and accelerate the process of deploying and managing your machine learning models. And finally, when you get your model out there, you want to make sure that the model is performing optimally. So you want to monitor that model's performance, and you want to be able to glean any meaningful analytics from the model. Now, this whole path that you're seeing, this curvy path, is the model operations lifecycle. I want you to keep that in mind, because we need to see where these data services or managed services would actually live. We're going to start with the Red Hat managed cloud platform. We want to have a platform that is very stable, that will allow us to work not only across the hybrid cloud but on-prem, in the public cloud, and even on various edge devices. Next, we have Red Hat managed cloud services. These are the cloud services that we provide to our customers, and you'll notice, kind of in the center there (I don't know if my mouse will work), we have the Red Hat OpenShift Data Science platform. On top of that, we have what we call our ISV managed cloud services. These are independent software vendors, such as Starburst Galaxy or Anaconda, that we have integrated into our Red Hat OpenShift Data Science platform so that you can use some of the services they offer. And then we have customer managed ISV software.
So if you wanted, for instance, in a model to take a look at quantization or at inferencing, you could use Intel OpenVINO (which, I apologize, I forgot to put the icon up there), or you can use Seldon Deploy. Now, remember I told you we had those four categories. Those four categories kind of sit on top there: gathering and preparing the data, developing the model, integrating the models in application development, and model monitoring and management. And of course, you're gonna be able to retrain the models. This whole Red Hat OpenShift Data Science offering actually sits on AWS, so it's a cloud offering right now. And what I'm going to do is go through a demo; I think I have enough time for that. At least I won't have to do it as a live demo. But one thing I wanted to mention about this entire platform is that we have depth and scale, basically without lock-in. The capabilities we have are really in conjunction with Red Hat and the service partners that we brought into this ecosystem. So you have this managed cloud platform, you can use the Red Hat portfolio and services, and you can take advantage of open source products through our partner ecosystem. Okay, this is gonna be very quick; I'm glad I'm gonna be clicking through it. One of my colleagues actually worked with the London City Metro, and the London City Metro wanted to be able to monitor cars within the metro area. They wanted to be able to recognize license plates and see: is that car able to park here? Does that car actually have a tag so that it can use certain metro ways? Does this car contain somebody who did something bad that we want to track? Okay, you get the idea. So here I have a picture of a car. What we're hoping the machine learning model will do is take that license plate, read it, and actually grab the plate numbers.
And then once we get those plate numbers, we can use Apache Kafka to store that information and possibly generate an Amber Alert. In the meantime, we'll be pulling a lot of those license plates into our various warehouses and into our vehicle registration database. And of course, the City of London's metro services can then perform more business analytics on the data we've gleaned. So what does this look like if we're using the Red Hat OpenShift Dedicated platform? Well, because the Red Hat OpenShift Dedicated platform sits on top of AWS, you are going to need a cluster to actually use Red Hat OpenShift Data Science, or what we affectionately call RHODS. I know I'm going to hell for giving you that acronym. So we're going to click on the RHODS menu option, and then what you're going to see is basically a menu of the managed services, and of course there will be managed software available to work with. In the background, you'll see that I've chosen one of those items, JupyterHub. Notice the other ones: when I hit the explore icon, these are all the different managed services I can choose to use. And there's plenty of documentation; there are tutorials and quick starts, so if you want to learn more about the products and you're not very familiar with them, you can use those utilities. So I'm going to choose JupyterHub, and what that's going to allow me to do is go into a Jupyter notebook image. I'm going to take this notebook image and basically wrap it up in a container so that I can deploy it on OpenShift. But I want to customize it for myself. So the first thing I'm going to do is say, okay, if I'm doing some machine learning, what am I going to be working with? Am I going to be using just a standard data science package set, which may contain things like NumPy or pandas or scikit-learn?
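To make the Kafka step above concrete, here's a minimal sketch of how plate detections might be published to a topic. This is illustrative only, not the demo's actual code: the topic name, event fields, and the producer wiring (modeled on kafka-python's `KafkaProducer`) are all assumptions.

```python
import json

def plate_event(plate: str, camera_id: str) -> bytes:
    # Serialize a detection as a JSON event; kept separate from the
    # producer so it can be exercised without a running Kafka broker.
    return json.dumps({"plate": plate, "camera": camera_id}).encode("utf-8")

def publish_plate(producer, plate: str, camera_id: str,
                  topic: str = "plate-detections") -> None:
    # `producer` is anything with a kafka-python style send(topic, value=...),
    # e.g. KafkaProducer(bootstrap_servers="my-cluster-kafka:9092").
    producer.send(topic, value=plate_event(plate, camera_id))
```

Downstream consumers (the warehouse loader, the vehicle registration lookup, an alerting service) would then each read the same topic independently, which is the usual reason for putting Kafka in the middle of a pipeline like this.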
Or am I going to want to work with something such as this license plate detection, where I may have to use a lot of the PyTorch libraries that are available? So I'm going to click on PyTorch... sorry, on TensorFlow. And then I'm going to choose a container size. I'm going to choose a large container size, which also gives me the ability to choose the amount of RAM and CPU that I want to use. If I'm working with S3 storage, I then have the ability to enter the credentials, because we don't want to put the access keys and whatnot into our actual code. I know people that do that; it's fun to hack into their code, but we don't want to do that. So we can use these environment variables to hold that for you. Then we're going to start the server, have it spin up, and pop right into JupyterLab. So we've now given the data scientist an environment where they can go in and create their work. What they probably want to do is clone a repository that already exists; in this case, we're going to be cloning the license plate workshop. When we clone that, we'll see all of the files associated with it. Then we'll do what a data scientist does: start experimenting within that notebook and see if we can create a model that will successfully extract that license plate and tell us that it's been successful in extracting it. Now, if I want to deploy this application, you're not going to deploy it as a Jupyter notebook. I know people who have done that; please don't do that. What we want to do is package the model as an API. In this case, we're going to use Flask to help us accomplish this.
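The "package the model as an API" step can be sketched roughly like this: a tiny Flask app with a health endpoint and a prediction endpoint. This is a hypothetical minimal sketch, not the workshop's actual code; the endpoint paths, the `read_plate` stub, and the specific environment variable names for the S3 credentials are all assumptions for illustration.

```python
import os
from flask import Flask, jsonify, request

app = Flask(__name__)

# Credentials come from environment variables set on the notebook or
# deployment, never hard-coded in the source (as the talk warns).
S3_ACCESS_KEY = os.environ.get("AWS_ACCESS_KEY_ID")
S3_SECRET_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY")

def read_plate(image_bytes: bytes) -> str:
    # Placeholder for the real TensorFlow inference; returns a dummy plate.
    return "AB12 CDE"

@app.route("/status")
def status():
    # Health check: the demo looks for a status of "ok" before deploying.
    return jsonify({"status": "ok"})

@app.route("/predictions", methods=["POST"])
def predictions():
    image = request.files["image"].read()
    return jsonify({"plate": read_plate(image)})
```

Wrapping the model behind an HTTP interface like this is what lets OpenShift build, scale, and route to it like any other containerized application.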
Then we'll launch our server to see if we've been able to successfully deploy something internally, and we'll test that Flask app. We have a status of okay. Now we're ready to go back into our OpenShift Dedicated environment, where we first launched the RHODS platform from, to see if we can deploy this on OpenShift. So we're going to create a project for ourselves in OpenShift Dedicated. And then, yes, you guys have been good: you've been checking in your code as you've been working on your model, so we know that everything in the Git repository is perfect. Sometimes this can be very hard for a data scientist, because they like to save all of their code on their local laptop. But you, as the machine learning engineer working with this data scientist, are going to encourage them to check their code into Git. Yes, you are. All right, so now, from the Git option, we have some other things we can set, such as the resources and advanced options. What we're really interested in is making sure we click on the routing option, because we want the route of this API; we need to be able to access the API from another location. So we're going to copy that very important route, and probably test it in a browser to make sure that we can actually hit that API. And then, if you want to, you can go back into a Jupyter notebook and, using that route, use either curl or a web request to see if you can hit that API successfully. And to test your deployed AI/ML application, you can take that route, go back into a Jupyter notebook, and put in the actual API address. You're going to give it an image; in this case, I'm giving it an image called car.jpeg, so that it will see if it can pull the license plate number from that image. And wow, it actually worked.
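Hitting the deployed route from a notebook, as described above, might look like the following standard-library sketch. The route URL, the `/predictions` path, and the base64 JSON payload shape are invented for illustration; the real route has to be copied from the OpenShift console, and the real API may expect a different payload.

```python
import base64
import json
import urllib.request

def build_inference_request(route: str, image_path: str) -> urllib.request.Request:
    # Read the image and send it as a base64-encoded JSON payload.
    with open(image_path, "rb") as f:
        payload = json.dumps({"image": base64.b64encode(f.read()).decode("ascii")})
    return urllib.request.Request(
        route + "/predictions",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (needs the real route copied from the OpenShift console):
# req = build_inference_request("http://plate-api-myproject.apps.example.com",
#                               "car.jpeg")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

The equivalent curl would POST the same JSON body to the route, which is a quick way to confirm the API is reachable before wiring it into anything else.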
We actually have a car, and we were able to successfully predict the actual license plate number. So all of this is on Red Hat managed cloud services, and again, this demo concentrated on the Red Hat OpenShift Data Science portion. And remember, the goal with Red Hat OpenShift Data Science is to have a platform that's fairly open, so that if you have a specific open source vendor that you like, or specific requirements where you want to use not only Red Hat products but open source products, you have the choice to do that. So what did we learn today? Well, we learned that managed services, and in particular managed services for data science, are really important to a data scientist. They're just not gonna sit there with their model; they have to have some way of actually deploying their model, training it, and testing it. And they have to be able to do it in such an easy manner that they can accomplish the task themselves. And of course, IT operations will be there to help them on that journey. I think I finished in time so that I'm not preventing you from lunch, so thank you very much. Awesome. Do we have any quick questions before we break? All right, I do want to remind you that we're actually doing the book signing now. So if you want a copy of Kubernetes Operators, Jay is excitedly waving in the back and will be happy to sign it; you can have a book and get it autographed. And as we said, we will be back at 12:30 for the next session. We're gonna have some great customer sessions, so we'd love to hear from those practitioners. Thank you for joining this morning, and please come back for the rest of it. See you soon.