Before we start that, I just want to say a big thank you to Diane and the whole team that helped put this together. As Diane mentioned, we've got some of the lightning panels after this. There's still more swag, and there's actually a raffle, so I hope you will stay for all of that. But I do want to open it up first for the audience here. If there are any sessions you ruminated on a little bit and want to ask some follow-up questions about, we've got data science specialists, security specialists, folks from all over the communities. So please, if there are questions, let's start there. Don't be shy.

So Clayton, maybe we're going to come to you, because you typically have no shortage of conversation. Just a quick introduction; hopefully most of the people in the room saw your presentation, the kcp stuff earlier. Just what you're working on these days.

Sure. Clayton Coleman. I used to be the OpenShift architect, and now there's a plethora of leads and architects, and I've kind of stepped up a little bit to look at problems across the whole ecosystem. OpenShift is built on Linux; it's built on top of OpenStack and virtualization and public clouds and private clouds. So I spend a lot of my time in that space that Stu mentioned, trying to think about how we can do more to support the pieces coming together. kcp and some of the stuff I talked about earlier today is an attempt, at least, to look at patterns that we are all hitting. And actually, the real call to participation is: come find me today at this event if you have questions about patterns that you'd like to see more broadly applied, or patterns that you noted are important to you. There's a whole host of things that come up every time I've had a conversation with anyone about this topic, and it's that part of open source where it really is important for people to say what they're doing, because frankly, everyone doing this stuff is much better at it and knows the trade-offs, and knowing those trade-offs actually helps us make better decisions about what we invest in and where we devote effort. I was having a talk with Keith McClellan of CockroachDB earlier, and in a lot of cases the challenges that customers are having setting up CockroachDB in hybrid environments boil down to very simple things that we can improve, pretty low down the stack. When someone brings one up, it almost always is, well, this is an obvious thing that we can do to improve how customers and partners and community members work together to find the small wins, and those small wins can have big impacts. So this is the best time possible to ask those questions.

Great. If you would pass the microphone up front there. And Peter, maybe if you don't mind standing, just to say hi, just because of the camera angle; it helps a little bit.

Hey, everyone. My name is Peter Hunt. I'm a senior software engineer, and I work primarily on CRI-O and the node-level stuff. Sometimes runc, sometimes the kubelet, sometimes Podman if I'm feeling fun.

All right. We'll just go down the line here.

Sure. I'm going to just remove my mask because it's hot and difficult to speak in. But hey, everyone. I'm Oindrila Chatterjee. I work as a data scientist in the AI Center of Excellence team within the office of the CTO at Red Hat, and we work on various emerging trends in AI and ML within our team.
And I spent the past year working on building AI and AIOps tools for CI/CD data, while building these solutions and tools on OpenShift. So, yeah, that's a little bit about me. I'll hand it over.

Hello, everyone. I'm Akanksha Duggal, and I'm also a data scientist in the AIOps team at the Center of Excellence. I am based in Boston, and I've been working on the AI for CI project, which is a tool that helps you monitor your CI/CD processes. And I'm also a data scientist who leverages all the open source tools and platforms that Red Hat provides us as data scientists. So that's pretty much about me.

Hello, everyone. My name is Hugo. I'm part of the product team that is part of the applications group. We do all the workloads on top of OpenShift, from Kafka to API management, that you've seen in the previous session. So if you have any questions on running your applications, your integration, your APIs on Kubernetes and OpenShift, just let us know.

Hey, everyone. Andrew Block, Distinguished Architect with Red Hat Consulting. I work with customers across the globe to implement container solutions, OpenShift, anything. So over the course of time, I've probably seen it all, the good and the bad. So if you have any questions, come on over and I'm happy to have a chat with you.

Hi, I'm Annette Clewett. I'm with the platform group, and recently I've been working on multi-cluster disaster recovery, especially how it applies to using Rook to do mirroring, and in particular, also Advanced Cluster Management orchestrating all of that. Thanks.

Hi, folks. Kirsten Newcomer. I lead the security pillar product management team, which includes Red Hat Advanced Cluster Security. We focus on ensuring that OpenShift is hardened by default, and that we give you the ability to automate compliance with security and regulatory controls with the Compliance Operator. We're continuously investing, working closely with the CTO's office on things like Keylime for attestation and Sigstore, also with Andy Block on Sigstore, kind of making it easier to add signing into the CI/CD process. Did somebody say slow down? Oh, okay. It was the other room. So, tons of stuff, plus runtime security, working upstream with the Kube security SIG as they work to replace pod security policies. We're going to continue to support security context constraints in OpenShift, but also work with things like OPA Gatekeeper and Kyverno, and as the community evolves "pod security," which is the new name for what's going to replace pod security policies (drives me nuts), we'll be working on that too. Runtime behavioral analysis, deep observability, all sorts of stuff coming your way. Come find me if you have questions.

Hi, I'm Karena Angell. I'm on the OpenShift product management team. I cover Cloud Paks; IBM is one of our largest partners, so that covers a lot of areas, and the lessons we've learned in implementing and running Cloud Paks on OpenShift have really helped the rest of the product. You'll find areas across OpenShift that are just better for what we have learned with Cloud Paks. I also cover some upstream work: Open Cluster Management, which we talked about earlier today, is going into the CNCF Sandbox; KubeVirt, which a lot of people are interested in right now, is getting into incubation; and Helm. And I also work with Andy Block, and I think almost everybody knows Andy. I'm a Helm maintainer as well, and we have a talk tomorrow morning.
So, yeah, I cover a lot of different aspects.

Hi, I'm Daniel. I'm a technical marketing manager, most likely a developer advocate, at Red Hat. And I spend a lot of time evangelizing Kubernetes-native application development with things like Quarkus and Spring Boot, and also data workloads and the like on Kubernetes and OpenShift, of course. I'm also a CNCF ambassador, and at this KubeCon specifically, I'm the serverless track chair. So I specialize in serverless and service mesh for integrating cloud-native applications. Yeah. That's it.

Thank you, Daniel. So let me see. We actually got a question from the virtual audience that Diane fed into us from Hopin. I'm waiting for her to type it in. So I've got a piece, but just real quick, as I told you at the beginning, my name is Stu Miniman. I joined Red Hat one year ago today. I'm on the OpenShift product marketing team. I do lots of executive meetings with our customers. I was an analyst for a decade, so I do a lot of talking to our press and analysts. If you've attended this show before, I was one of the hosts of theCUBE basically since they started, a decade ago. So, exciting times. One of the nice things, if you made it here in person, is that we have a little more bandwidth to meet and talk and go a little bit deeper. As Clayton said, this is like the hallway track that we've all been missing, and you get to do it all week. And that's mostly who showed up here. So we really appreciate you all coming. Also, if you know people who are looking for jobs, Red Hat is hiring. There's a hiring social Thursday morning; if you hadn't heard about it, please let them know. I know the technical marketing team has about five more associate-level positions open, and product management and engineering have a lot of openings too. So it's good times. Please look them up, and just find anybody at Red Hat; we love to help connect people.

But the question that came in now is: with many products interacting with each other, how do you maintain the SDLC across both product and operating system? So yes, that's fun. And we might get Hugo to weigh in as well, I don't know.

So it's a large team and there are a lot of parts, so we do quarterly planning. We have to do alignment with RHEL releases, right? RHEL CoreOS is built from RHEL binaries. So OpenShift 4.8 uses RHEL 8.4 binaries, and 4.9 will also use 8.4 binaries; 4.9 is coming out any day. ACM has aligned their releases with OpenShift releases. So just as Kube has gone to three releases a year, so will OpenShift, and ACM releases typically about two weeks after that. ACS right now is moving from a three-week release cadence to a six-week release cadence, and we'll figure out over the next year whether we're going to maintain that or get more closely aligned with OpenShift; we'll see how that goes. And then most of the key components line up. Service Mesh releases pretty close to an OpenShift release. OpenShift Data Foundation is on a slightly different release cycle; somebody else here might know better than I, but delayed a little bit. Annette would know. OpenShift Data Foundation, which used to be OpenShift Container Storage, pretty much tries to line up with OpenShift, but usually we're a little bit off, a little bit delayed compared to the OpenShift release. So we keep working at it. We have a lot of coordination across the teams.
We also have a large program management team that helps with that. If there are more specific questions on individual pieces that you care about, let us know. But we do have lifecycle pages that lay this out on redhat.com (search for "Red Hat OpenShift life cycle"), and they also have a reference to layered solutions.

All right. Any other questions?

Hi. So this might be a slightly tough question, but you have enough people. A few years ago, before the world changed, I remember the big discussion was how Kubernetes is boring and all that. I always knew that was kind of BS, because there's so much to do. And for me, I see three areas, and I want you to talk about what Red Hat is specifically trying to do. One is multi-tenancy and scaling. You talked about some of it, but I haven't seen anything done around multi-tenancy so that, instead of everybody solving multi-tenancy themselves for large-scale SaaS, it can be part of the system. Another one is security. Obviously there's a lot of discussion on security, but one that's sort of hot is code signing chains, like Sigstore and so on, and that's hopefully something that's already in the pipeline. And the last thing is ease of use, because every single customer I talk to, and I talk to hundreds, says how difficult it is once you don't have enough experience, obviously. So those three areas, if you can speak to them.

We've got the whole week to cover that one, right? Yeah. So I'll take the first one real quick. Multi-tenancy is one of those things that goes back to before Kubernetes, even before we had namespaces. A Red Hatter helped drive the design of namespaces in Kube, along with quota and limit ranges. We spent a lot of time on security context constraints, from which pod security policy evolved. But there are only certain problems that can be solved inside a Kube cluster. And so part of the talk from earlier, and where we're trying to think, is: what are the pieces of multi-tenancy that are useful? You asked a very good question earlier about federation. The thing that federation lacked was any real concept of how you break apart the individual problems so that you can evolve them independently. And I'll give an example here. If you have 70 clusters, you have 70 different versions of operators, software, APIs, lifecycle. You can automate those to bring them into alignment, but each one of them is a unique failure domain. And that's how Kube is designed. What I'd like to see, and what I think we're gearing up for, is this: there are small efforts, medium-sized efforts, and big efforts, and I want to talk about the big effort. The big effort really is to help us do API evolution at large scale. So imagine that you have an integration that you want to roll out to, let's say, 10,000 applications. How do you roll that out safely? You need to decide who's going to test it first. How do you test it close to production? How do you roll it out in a controlled fashion? What happens when someone is using one field, one very specific combination of behavior, and you break it? What are the metrics that tell you you just broke 10% of your fleet, 15% of your applications? What if that issue only emerges later? How do you work backwards from that event? So there are a lot of problems inside a Kube cluster, with CRDs as the extension of Kube. We're never going to support tenancy of APIs within a cluster, different APIs for different namespaces, because of a fundamental characteristic of Kube.
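As an aside, the in-cluster tenancy primitives mentioned above (namespaces, quota, limit ranges, RBAC) compose roughly as follows. This is a minimal sketch with a hypothetical team name and illustrative limits, not a recommended production setup:

    # Give a team its own namespace with a resource ceiling.
    kubectl create namespace team-a

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "10"
        requests.memory: 20Gi
        pods: "50"
    EOF

    # Grant the team edit rights only inside its own namespace.
    kubectl create rolebinding team-a-edit \
      --clusterrole=edit --group=team-a --namespace=team-a

Note that every one of these objects is scoped to a single cluster, which is exactly the limitation the discussion turns to next.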
And that's partially what drove that higher-layer question: we can take those concepts, take a chunk of APIs with namespaces and RBAC and all that magic from the existing Kube APIs, and break it up into little pieces. I think that's one of the ingredients we need: the ability to say I might have 10,000 teams, or 100,000 teams. And to your question about scale, organizations increasingly run the gamut; they might have one team, or they might have 100,000 teams. Kube solves a part of that problem. In open source, a lot of us actually recreate those same problems over and over again. How do we hand out resources to teams? How do you give people access to cloud accounts? How do you parcel out infrastructure? How do you do cost management? How do you give access to certain APIs to some people, like the ability to create clusters, and take it away from others? What I would really like, and this is a key part of our investigation in kcp, is to try to break those little chunks up so you can say: I get an API space that feels Kube-like, that I can do all the things in. I can add new APIs, but those are mine. Then I can go to the next level and scale them out. That's the big change, but below that: okay, that would help us in ACM. That would help us in Argo. That would help us in CI. How do you give people access to new types of APIs? Maybe you get part of the API for pipelines, but not the other part. We don't have a lot of tools to control access above a cluster, and so there's an element of investment in that area. As we go down further in the stack, there will be implications, but I really do think that you can't build multi-tenancy into Kube as it is without breaking too much of what we do. So we're going to try to take Kube and lift it up to that higher level.

I think that's a great summary. We're also investing in separation of control plane and data plane, which is another significant area that enables serious multi-tenancy, right? It gives our large customers the ability to have a control plane that can manage multiple cluster data planes. For existing multi-tenancy, as Clayton said, there are limitations to what we can do, but there's still a lot there: SELinux, SCCs, RBAC, namespaces. A lot of things are in place, and a lot of investment is going on at Red Hat to really enhance that space. I'm going to ask Andy to talk about the Sigstore question.

You read my mind, you read my mind. So as you know, Sigstore is a project that Red Hat is actively involved in, and from a product management side, Kirsten will certainly attest that we're doing a lot of work within Red Hat to bring a lot of those tools into our ecosystem. So in the future, you're going to start seeing more of them as part of the product itself, everything from the fundamental Red Hat Enterprise Linux CoreOS layer all the way up through OpenShift Container Platform. And the container image is just one aspect of your entire software supply chain. Aside from your container image, you want to also think about: how are you protecting the source code? How are you packaging all your dependencies? That is something you need to think about, and something that we'll be working on with the open source community as a whole, to help evolve the concept of an SBOM and other tools similar to that. So be on the lookout; some good stuff is on the way.
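To make the signing piece concrete, here is a minimal Sigstore sketch using the cosign CLI with a local key pair; the image name is illustrative, and Sigstore's keyless signing flow is not shown:

    # Generate a signing key pair (writes cosign.key / cosign.pub).
    cosign generate-key-pair

    # Sign the image in a CI step after it is pushed to the registry.
    cosign sign --key cosign.key registry.example.com/myapp:1.0

    # Verify the signature before deploying.
    cosign verify --key cosign.pub registry.example.com/myapp:1.0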
Ease of use is one of the hardest problems, I think, because: what are we trying to make easy? One of the things we often notice is that there are so many different ways of making things easy. OpenShift, from the very beginning, was about trying to streamline that process. The PaaS was really an attempt to make the first experience simple and to keep it at that high level. Unfortunately, reality is a lot messier, and the PaaS that we ended up with is this wealth of different choices. Some people may want to trade Tekton or Argo for Jenkins or a more opinionated build flow. A focus for us will be trying to bring together an experience that makes the application development story that we're all using a little more effective and well integrated. But I think this is a hard problem: it's the combination of the need for capability within our ecosystem with the fact that every additional bit makes things more complex, and it's where those things intersect that it really gets hard. Tekton pipelines are pretty darn powerful, but they can't do everything; what happens when you need to cross out of that? How much do you abstract pipelines for your organization? I think we're always looking at this. Our focus, probably in the near to mid term, is trying to build experiences around those common paths that I mentioned, looking for the common patterns that work for 75 to 80% of users, and really drilling down on experiences that hide the details that are there. And if everybody stopped asking for new, exciting features, I think this would get easier. But then that would be boring, as you said, and we wouldn't have much to do. And we wouldn't go to KubeCon, because there'd be no point.

I'm going to add to that. With each release, we have a lot of product managers and a lot of engineering teams covering the entire platform. With each release, as Kubernetes matures and as OpenShift continues to mature, the teams get opportunities to further make it simpler. Because hard is easy... (Sound bite! That's the quote of the weekend. Thank you, thank you.) ...and easy is actually very difficult. So, like I said, with each release, every team is looking at this, and you can see it each time. With 4.9, when you start playing with it, you'll see there are different aspects that are easier. I talked earlier about Argo, where just simplifying the different UIs means not having to go to different places to do things. And it's like that across the entire platform. So I just wanted to add that.

We did get another question online, courtesy of Diane. It ties in a little bit to some of this, but it's something we all look at: developers, what do they need to be aware of and think about for future fixes? The question specifically asks about security risks and vulnerabilities, what container images they're using and building, and vulnerability drift at runtime. I think a couple of our presentations today on GitOps covered a little bit of it, but yeah, Andy, Kirsten?

I'll start, and Andy, I'm sure, will weigh in. One of the things we talk about internally a certain amount is the state of vulnerability scanners, which is frankly a challenge, right? One of the things folks are dealing with is an overwhelming amount of vulnerability data that comes out of scanning an image, et cetera. And so there are a couple of angles to take.
One is, for the developer, the earlier you can find the information, the easier it is to fix. So using things like IDE plugins, with Snyk data available to you from Red Hat with your OpenShift subscription, really can help. Yes, you want to use image scanners on the images stored in your registry, ideally a certified image scanner so that you get Red Hat data. If you're using a Red Hat base image as part of your custom build, that'll give you data that links to fixes as well, but it's still overwhelming. So there's a supplement you can look at: leverage something like Red Hat Advanced Cluster Security, or some of our other security partners, that gives you runtime behavioral analysis and runtime context. You don't want to wait until production; use those tools on your test cluster, so you can see which vulnerabilities matter, which pods would actually be exposed to the internet if they were going to be exposed, right? Contextualize and get a little bit more information to help inform your focus and do some risk assessment.

I work with a lot of development teams at different organizations, and one of the key challenges I see is that they're just getting into containerization still. Some of us in the room have been doing containers for many years; some organizations are still in their infancy or pretty young with them. The challenge we see is that you involve the security team too late, which causes your developers to bang their heads on the desk multiple times, because they will spend hours and hours developing the best code ever, and it works fine in development, et cetera. They don't actually turn on scanning until they head towards production, and then they realize: oh, all that time I spent building my container perfectly, and it has a vulnerability because I did X, Y, and Z incorrectly. If you tell them ahead of time, using tools like Kirsten mentioned, IDE plugins and scanning tools, you make it easier for developers to become self-sufficient as well as self-aware, so they can better help themselves. Anything they can do to get the process down faster and get releases out faster will then make their product managers even happier.

I was actually going to touch on a point there, talking about moving things left in the pipeline. One of the things I think has been most successful about open source is when we pick technologies and patterns that both make a problem more obvious and put it in our path. A lot of what moving security left does is take a lot of frustration concentrated in a few people early and distribute it all the way out to the people who ultimately are the ones who are going to have to make the calls on what makes sense. And so a lot of the process, when we talk about ease of use, is: can we get a commonality where people see this kind of information early in their process? IDEs are a great place for it. Can you move the problem closer to the actual person who's affected? Sometimes that means the net annoyance is actually higher than having one security person at the end of the day trying to sort things out, but it scales better. And ultimately, that frees the security person to go deal with the actual problems, like real vulnerabilities and improving the process. So again, a lot of what we can do in open source is help build these parts of our process into the tools and the technologies, so that we don't ask the question "should I scan?"; it happens automatically.
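As an illustration of scanning as an automatic pipeline step, here is a minimal sketch using Trivy, one open-source scanner (not one the panel names; ACS users would reach for that product's tooling instead), with an illustrative image name:

    # Fail the CI job if the image has any HIGH or CRITICAL findings,
    # so problems surface long before the image heads toward production.
    trivy image --severity HIGH,CRITICAL --exit-code 1 registry.example.com/myapp:1.0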
All right, well, I want to thank everyone on the panel, the speakers today, and everybody participating today. Thank you so much. Thank you.