This is your opportunity to ask the Red Hat folks who are here in the room questions about anything you heard today. So I need a first person to come up and ask something. I'm going to hand off to Julio, and I will run around and bribe you if I have to. Otherwise, take it away, Julio.

All right, thank you, Diane. First of all, I do want to thank Diane and all the people that have helped organize today's Commons. I think it was a pretty fruitful event, so thank you. All right, folks, so this is one of my favorite segments. This is the Ask Me Anything. So I would like to open it up to the audience to see if there are any questions whatsoever. Product, strategy, technical questions, whatever you like.

So I'm part of the Knative community, and I know that you guys released a preview of that. I just want to understand how important you see serverless being in your platform, and can you share future plans or even a roadmap?

Sure. So the question was around Knative and serverless, and what our plans are. Tushar, do you want to handle that?

Yeah, so as you said, Knative, for those of you who don't know, is part of the community effort to do serverless, and Red Hat and Google and others have been some of the foundational members of that. Knative and serverless are a core part of the OpenShift platform, and we released this as tech preview in 4.2, which is the most recent release. So we are very excited about that. There's also talk in the community about what to do about the foundation itself, and whether it should belong to the CNCF, so those discussions are ongoing. But just to answer in short, that's really the foundational platform for our serverless technology, and we're fully behind it. As I said, it's tech preview, and we're looking forward to GA in the upcoming releases, 4.3, 4.4. And, you know, there are various parts of it. We have another —

Yeah, I'll add to that. From a model serving perspective, we at Seldon have been part of a joint effort, a project called KFServing, which also includes Google, Microsoft, Bloomberg, IBM — Red Hat may be involved to some degree — but basically it's a serverless model serving platform built on Knative. We're really excited about it from an auto-scaling perspective: more efficient scaling and scale to zero. It's currently in a 0.1 early release, and it's an independent project that sits under the Kubeflow project, but it doesn't have a dependency on Kubeflow, so you can use it within other ML pipelines outside of Kubeflow. It's framework agnostic and supports pre- and post-model compute jobs, so it's almost like an inference graph.

Great, thank you so much.
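For reference, here is a minimal sketch of what deploying a model through KFServing can look like, using the Kubernetes Python client to create an InferenceService custom resource. This is an illustration only: the API group, version, and field layout follow the early v1alpha2 shape and have changed across releases, and the namespace, resource name, and storage URI below are placeholders, not anything mentioned in the session.

```python
# Sketch: create a KFServing InferenceService via the Kubernetes Python client.
# Assumes the KFServing controller is installed on the cluster and uses the
# early v1alpha2 resource shape; field names have shifted in later releases.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod

inference_service = {
    "apiVersion": "serving.kubeflow.org/v1alpha2",
    "kind": "InferenceService",
    "metadata": {"name": "sklearn-example", "namespace": "models"},
    "spec": {
        "default": {
            "predictor": {
                # minReplicas: 0 lets Knative scale the model server to zero
                "minReplicas": 0,
                "sklearn": {"storageUri": "s3://example-bucket/model"},  # placeholder URI
            }
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kubeflow.org",
    version="v1alpha2",
    namespace="models",
    plural="inferenceservices",
    body=inference_service,
)
```

The minReplicas setting is where the Knative underpinning shows up: the model server can scale to zero when no requests are arriving, which is the scale-to-zero behavior discussed above.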
All right, any other questions here?

I'm gonna ask a leading question, because I get to do this. Maybe from Sherard's point of view: what's missing in terms of operators? What's on your wish list — things that haven't been built yet, operators that aren't available yet, that you really want to make sure get into Operator Hub and become available for Data Hub?

Given my background in data and security, and customers threatening to sue, I would like to see more in Operator Hub around how we secure our data. We talk a lot about AI, and everyone understands that without data, you don't have AI. So, more of a focus on partners that will help us figure out that whole security aspect of it, and just make it as easy as it is to deploy a Kafka or a Seldon or Prometheus. Let's make it also easy to secure your data and really make that more streamlined.

Hey, so I'll go ahead — no, I'll just add to that. Another angle to it: if you think about the Operator Maturity Model, we have the various levels, from simple install and configure up to what we call full lifecycle. Beyond that, there's the question of AIOps and intelligent operators, and that's some of the cutting edge that's kind of missing. I know of people working on it, so that's another area that I think is missing right now but could be filled.

All right, thank you. So actually, following up on that question about operators: one of the things I heard a lot about today was the complexity that a lot of these solutions require — this idea of a meta operator, an operator that deploys other components that are themselves operators, essentially. But today, as far as I understand it, there's no real operator-to-operator awareness. Is that something you think will become part of maybe an intelligence layer within the OpenShift platform, or within the Kubernetes layer, to have that operator-to-operator communication?

I've actually done that before. So within OpenShift and Operator Lifecycle Management, you can actually create subscriptions — and I know that's a really confusing word and completely overloaded, especially by Red Hat; the naming is bad. Anyways, you can basically create a subscription that pulls in those dependencies to then install what you need. So in terms of storage, I've done that where I say: I push a subscription to a new cluster, I wanna have my database, I wanna have it backed by Ceph block storage, and I want it to install all of those operators for me and let me know when that's done. So it actually does provide some operational process through OLM to be able to make that happen. [A rough sketch of such a subscription appears after this exchange.] So is that what you're referring to, or are you...

So yes, but there's also this context of discovery, because for instance, I need these operators to exist, so I'm gonna deploy them — but what if those subscriptions already exist? How do I share keys or have some sort of operational knowledge between vendors, right? Because I'm from an ISV, we're a monitoring solution. There are lots of database ISVs that have the best practices about their database, but it'd be great if, as a monitoring provider, I could interact directly, through my partnership with that database company, with their deployments. Just the operational awareness between operators, I think, is the thing that's missing.

Yeah, so my impression is that we're just getting to that problem, right? If you look at it, Open Data Hub is a meta operator. We have other ones as well — CVO, OLM — there are a bunch of them. And we're exactly at that point now where we just hit the problem: oh, we wanna install two of them, right? And of course they all install the same components, because Kafka is in everything. And I think it's a good time right now to actually bring that up on the community side and define the best practices. So we should follow up and see how we loop you into that discussion.
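To make the subscription mechanism described above concrete, here is a rough sketch of pushing an OLM Subscription to a cluster with the Kubernetes Python client. The package name, channel, and namespaces are illustrative placeholders; the general idea is that OLM resolves the named package from a catalog source and installs it, along with any operators it declares as dependencies.

```python
# Sketch: ask OLM to install an operator by creating a Subscription.
# Package name, channel, and namespaces below are placeholders; OLM pulls the
# operator (and any operators it declares as dependencies) from the catalog.
from kubernetes import client, config

config.load_kube_config()

subscription = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "my-database-operator", "namespace": "openshift-operators"},
    "spec": {
        "name": "my-database-operator",        # package name in the catalog (placeholder)
        "channel": "stable",                   # update channel to track
        "source": "community-operators",       # catalog source to resolve from
        "sourceNamespace": "openshift-marketplace",
        "installPlanApproval": "Automatic",    # let OLM apply updates without manual approval
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="operators.coreos.com",
    version="v1alpha1",
    namespace="openshift-operators",
    plural="subscriptions",
    body=subscription,
)
```

Watching the resulting InstallPlan and ClusterServiceVersion resources is one way to get the "let me know when that's done" part of the workflow described above.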
Yeah, and I'll give you an easy one. This one is more — if an operator is dependent on another operator, as of 4.2 it knows that there is a dependency and it deploys that operator, which was not there previously. So that's just another way of an operator being cognizant of another operator being there. Previously, you'd have to actually do it yourself — manually, I mean.

But I think the vision we want — and with Open Data Hub we are trying to get there specifically — is having a reference architecture that can broker the relationships between multiple vendors, right? Our vision for Open Data Hub is not that Red Hat becomes a vendor for an AI platform. What we want is that OpenShift is a platform where customers can deploy flexible AI platforms and, in that, get an experience that's integrated across the whole ecosystem. You know, at the end, right now you have the hyperscalers with their integration capabilities and centralization, which is just a very heavy model, and there's not much room for the rest of the software industry, right? But they set the standard of what customers expect from an experience and integration point of view. And with operators and Kubernetes, we have the capability to counter that with a distributed integration model. But for that, we need to figure out how to negotiate the integrations at scale when customers basically deploy combined solutions on the fly. And so that's why this is an extremely important topic that we really need to sync up on.

Thank you. Great. Another question. Come up to the mic. Over here.

Yeah, so my question is about — we are doing AI. Are we doing AI on online or offline data — I mean, how real time is it, in OpenShift per se? And also, how can AI be used on data collected from IoT? So various devices, and we are collating and putting data in OpenShift and then doing AI/ML on top of that.

Got it. Zed, who wants to take that one?

Okay, yeah — so on the IoT side, that's a really big challenge, ultimately, and one of the solutions to this will be to compute at the edge and use the federated learning techniques that we discussed earlier. And what was the first question again? Sorry, I missed that.

Yeah, online and offline data.

Yeah, most models are batch: they're trained and then released, and then retrained and used. So the frequency sort of depends on how important that most recent bit of data is. There are some online algorithms, but the majority you'll see out in the wild are batch based.
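As a small illustration of that batch-versus-online distinction — using scikit-learn's SGDClassifier purely as a stand-in, not a model anyone in the session mentioned — here is roughly what the two update styles look like in code.

```python
# Sketch: batch retraining vs. online (incremental) updates.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_old, y_old = rng.normal(size=(1000, 4)), rng.integers(0, 2, 1000)  # historical data
X_new, y_new = rng.normal(size=(50, 4)), rng.integers(0, 2, 50)      # newly arrived data

# Batch style: periodically retrain on the full (or windowed) history and redeploy.
batch_model = SGDClassifier().fit(
    np.vstack([X_old, X_new]), np.concatenate([y_old, y_new])
)

# Online style: keep the deployed model and fold in the most recent data as it arrives.
online_model = SGDClassifier()
online_model.partial_fit(X_old, y_old, classes=[0, 1])  # initial fit
online_model.partial_fit(X_new, y_new)                   # incremental update, no full retrain
```

In the batch style you retrain and redeploy on some schedule; in the online style the deployed model absorbs new data incrementally, which matters when the most recent data carries most of the signal.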
So I want to add a little bit to this as well. When it comes to the IoT side, my argument on the IoT space and on the edge space is that the infrastructure is very important — that's why everyone is here. The infrastructure, when it comes to the edge, is a whole new level, because we're now talking about infrastructure that lives in an edge data center or on an edge device. How do you connect to it? How do you make sure that what you're connecting to is what it says it is? How do you set up the policy behind it? And that's before you even start going after the technical sides of it. And before we even get to proper IoT, where you can literally have devices move and be anywhere, there are also issues with the radios that we have today.

If you look at a standard 4G deployment, the density you're gonna have per square mile is, I think, around 4,000 devices, which is why when you go to a conference or to a concert, your phone stops working. When you bring that to the 5G space, I think it's three or four million devices per square mile that we're able to have from a density perspective. So for me, the exciting part is not the speed — how fast can I download something — but how many devices I can get into a very small area. But these are some huge technical challenges that we have to solve, and almost all of them, to start off with, are gonna be on the edge. Eventually we're gonna have all of the ethical issues and various other things that we spoke about piling on as well as we start to develop these. So we have a whole lot of work that needs to be done there, and a lot of this stuff hasn't been defined yet. So there's a huge gap there.

Of course, people are doing data collection at the edge today, right? But you'll see, progressively, there's open source work happening to figure out how to do filtering, how to push decisions out to the edge. Look at just OpenShift metrics itself. We collect a lot of metrics in OpenShift 4 if you don't opt out — and I recommend you don't, because it's actually really beneficial for you: we can tell you whether your cluster is broken or not, and we can predict certain things based on what we see. At the end, if you look at Red Hat's business, it's generating knowledge about open source software. Customers pay us because we know things, or we can figure them out, and then we can fix them, right? And often that's because we have many other customers, and we've probably seen the issues that you're running into before. It's like a herd immunity thing: if you send us your data, your operational metrics, we can probably identify patterns based on what we've seen somewhere else.

The problem is that doesn't scale; we can't collect all the data. So we'll get to the point where we actually have to push decisions out to individual nodes. And this goes deeper, right? You can think about simple, small-data optimizations in the kernel or in the toolchain in Linux, where you replace static heuristics with machine learning — that totally makes sense. You can't wait for the cloud to take a decision. It's not only that when I'm in a self-driving car, I don't wanna wait for the cloud to tell me whether to stop — I don't want that in my IT either. And we don't have the bandwidth to send all the data, and even if we could send it, bandwidth-wise, we don't wanna store it all. So you need to figure out how to push decisions out and basically identify the interesting data that you wanna learn from, and then figure out how you either federate the learning or you send enough data up to be useful, but not too much — not more than you can handle. So I think it's a general problem that everyone has, and that being said, there are solutions today. And I think you're gonna see a lot of innovation in that space and in open source over the next couple of years, right? Because —

Yeah, I agree.
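A toy sketch of that "identify the interesting data" idea — not how OpenShift telemetry actually works, just an illustration with assumed thresholds and metric names: keep a cheap running baseline per metric on the node and only forward samples that deviate from it, plus an occasional heartbeat.

```python
# Sketch: decide at the edge which metric samples are worth sending upstream.
# Keeps an exponentially weighted mean/variance per metric and forwards only
# samples that deviate strongly from the local baseline, plus a periodic
# heartbeat, instead of streaming every reading to a central store.
import math

class EdgeFilter:
    def __init__(self, alpha=0.05, threshold=3.0, heartbeat_every=1000):
        self.alpha = alpha                  # smoothing factor for the running stats
        self.threshold = threshold          # deviations (in std devs) that count as "interesting"
        self.heartbeat_every = heartbeat_every
        self.stats = {}                     # metric name -> (count, mean, variance)

    def should_send(self, metric, value):
        count, mean, var = self.stats.get(metric, (0, value, 0.0))
        count += 1
        # update the exponentially weighted baseline
        delta = value - mean
        mean += self.alpha * delta
        var = (1 - self.alpha) * (var + self.alpha * delta * delta)
        self.stats[metric] = (count, mean, var)

        if count % self.heartbeat_every == 0:
            return True                     # periodic heartbeat so upstream stays in sync
        std = math.sqrt(var)
        return std > 0 and abs(value - mean) > self.threshold * std

# usage: only unusual readings (or heartbeats) leave the node
f = EdgeFilter()
for reading in [0.5, 0.52, 0.49, 0.51, 5.0, 0.5]:
    if f.should_send("node_cpu_seconds", reading):
        print("forward", reading)
```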
But having said that, there are edge use cases that people are using today already, so I don't want us to leave with the impression that this is all future. For example, I know of use cases where, at an airport, you are getting a camera feed from the gates, and there are ISVs out there that can analyze that camera feed using a model which has been trained somewhere else — maybe on Azure, or maybe on AWS — and then that model is deployed locally at the airport so that it can do real-time analytics of that image. So we actually know of people who are using OpenShift to try to solve that problem. So it depends on how you look at edge — some of that is already happening. That's one example I had.

Yeah, yeah. And like I said, there's another example — I don't know if you want to say it — of somebody in railway stations, right? Same thing: how do you use AI to do the same use case?

Yeah, one thing I wanted to add: that's a very good question, right? As a customer, you'd like to see reference architectures and solutions — okay, what does the solution look like? So I think as a vendor community and a partner community, we need to go in the direction where we can provide prescriptive guidance and architectures: hey, this is how you do this kind of solution from the cloud, to the data center, to the edge, and the use cases and so on. That way it becomes easy for you to operationalize it for your use case in your environment.

So say I'm a customer and I have a specific use case. How do I plug into the whole Open Data Hub structure so that I can get my use case looked at? Who would like to take that — maybe Sherard?

Yeah, we're actually trying to make that process simpler. I think the best thing is that we work a lot with the field, so it's really just getting in touch with the field, getting in touch with Tushar. He drives a lot of the use cases that we see from an AI/ML perspective, and he's probably a better person to take this question, but we spend a lot of time understanding the use cases from customers and really driving home: what are they trying to do? What's the value that they're trying to bring to their customers, and why is it relevant? But then we also apply those internally. You heard a lot about Open Data Hub and how we use that internally at Red Hat. We have our own set of challenges and our own set of use cases that we bring to the table, and we bring them into an open environment. Anyone can join the community. We'll be starting to have community meetings where people can just chime in and share their use cases with the people who are actually contributing to Open Data Hub, and just make it more of a conversational piece: okay, what are you trying to solve? Is there a reference architecture we can provide? Is there more information about tooling that can be added to help out with the use cases? And really just drive it from the top down — make sure we're focusing on the use cases, and that drives what the community feels it needs.

And not only are we doing Open Data Hub. In OpenShift Commons we have a special interest group, a SIG, for ML that is a great place to engage, and Sherard's team is participating in that directly. So you can have a conversation there about use cases.
No, I mean, I was talking to a friend from Microsoft at a coffee break or something, and they are also doing these kinds of reference architectures. So we were talking about them participating in that SIG or in Open Data Hub or whatever — they have some plans too. But to Julio's question, I think there is more and more interest. And obviously, at these Commons gatherings, one of the things that Diane wants is more and more customers to come and talk about their use cases, because that gives real validation — those are real-world problems. That's where we get these reference architectures. So, great.

Can I add something real quick as well?

Yes, go ahead.

Yeah, so when it comes to these SIGs — these things in the Kubernetes space — I spent a lot of time working with the CNCF and Linux Foundation Networking and similar, and a significant portion of the important decisions occur on the SIG calls and the SIG mailing lists. So when they say join a SIG, these are some of the most effective ways to get involved in order to make a change. So please, if there's something that's important to you and you see that there's a SIG attached to it — whether it's a Red Hat-organized one or a community one through the CNCF or a Kubernetes SIG — please get involved. Please say what your use cases are and where the gaps are, because they're not gonna know what the gaps are unless you tell them. And even better, if you have the resources to join in and actually send some contributions in, they're always hugely appreciated. There's a long tail of things that need to get fixed, and the more people who fix that long tail, the more the core engineers can get done to fix your critical problems.

Great, thanks. So another question that I get very often is around working with partners. If I'm an ISV partner and I want to be part of Open Data Hub, what's the best way to connect and engage? What would be a good...

That's another good question. We actually have Ryan King — I don't know if he's still in the audience — but he helps out a lot with the partner environment and just figuring out how we have a relationship and how we grow together. But I guess, what's the best channel, Daniel? I think Open Data Hub and the SIG might be the...

Yeah, the SIG. Another way is to join the SIG, join OpenShift Commons — I'll answer — and the people who are responsible for working with the ISVs and the partner space are all on those SIGs. So that's really pulling all of it together. I don't know, Diane, if you wanna...

I'm just about to give them all those road tips.

Right, so we'll hear more about that. But what we do with partners is active enablement, right? We can actually actively help. And that's also because, for us, the space is very customer-driven. So if you're an end user and you are trying to do something and you have specific tools or ISVs you wanna use, you can also talk to Red Hat through your existing Red Hat channel, through the SIG, through the OpenShift Commons SIG. And you help us prioritize which problems we solve and which partners, for example, we pull in. That's why we have partners up here on stage: because we had customers who had problems, so we reached out and worked together to solve customer problems. And that's really what this is all about.

The process was obviously very efficient.
As CEO of a sort of scale-up company based in London, I had very little involvement as CEO, but what I did see is that a small number of team members — I think it was one engineer, and one of our business development people coordinated it — went through a very efficient certification process. I don't know if you mentioned that, but that was step one, and then there was the Open Data Hub opportunity, which emerged from that. So maybe that is an interesting route to try getting on board: with certification.

Yeah, one thing I want to add: if you're in IT, be friends with the data scientists and the data engineers in your organization, because they are looking at all kinds of tools, and at the end of the day, you may be called on to support all those tools on your infrastructure. So from that perspective, if you're able to get ahead of the curve and find out all the tools that are the favorites of your customers — as in the data scientists and data engineers — then we can prioritize those as well in terms of the operator integrations, through the Red Hat account team or the SIG and the special working groups.

All right, Diane, did you want to do something?

I've got a couple of slides that I'm going to throw up there so you can figure out exactly where to go, but I want to thank everybody for...

All right, well, thanks, everybody. Thank you. Yeah. All right. Thank you, Julio. That was good. All right, Seldon and doc.ai.