Okay, I hope you're ready for some edge computing, because that's what we're going to talk about. Edge computing is hot. This session is about the business value of edge computing. What it is not is "what is edge computing?" We're not going to cover that; if you want to go more in depth, there's a lot of Linux Foundation material you can download. My name is Larry Carvalho. I'm an independent analyst at RobustCloud, and with me I have three panelists. I'd like them to introduce themselves, starting with Marilyn.

I get to go first. Hi, everyone. My name is Marilyn Basanta. I work at VMware and run all the product management for our edge computing platform.

Stu? Hi, I'm Stu Miniman. I've been coming to KubeCon since the early days, and to the OpenStack shows before that, when we were starting to talk about containers. I joined Red Hat two years ago as Director of Market Insights, part of the OpenShift team. Before that, I spent a decade as an analyst and host of theCUBE.

Muneyb? Hey, my name is Muneyb Minhazuddin. I'm the chief marketing officer for the networking and edge business at Intel. I've only been there 45 days; before that I was at VMware, building the edge computing platform as a general manager.

Okay. So what we put together here is a discussion of different edge use cases. Edge is running all over the place: think about agriculture, mining, healthcare. We've got a lot of edge solutions. What is the business value you can drive in, say, manufacturing? Can you improve maintenance, throughput, quality? Those are the things we need to think about when we bring up edge. So I'm going to start with Muneyb, who will spend some time on one of his use cases, and then we'll have the other two panelists talk about theirs.
Then we'll move to more of a discussion and Q&A, within the panel and with the audience. So with that, Muneyb, I'll bring up your slide.

Yeah. We decided to showcase what we did for smart cities and built out a roadside unit. I know we said we won't talk about the definition of edge. Edge is an interesting phenomenon that has been here for a long time. What we see now is workloads being built at the edge more often than in the past, and like Larry just said, they manifest themselves in different verticals because each vertical saw a particular problem. So I think all of us have a vertical use-case scenario, and there's a lot of convergence of technology happening there too. In this instance, we've converged 5G connectivity with AI inferencing at the edge, because we feel inferencing at the edge is a major driver. There's a lot of new data being generated; how do you infer from it? A lot of that is trigger based. There's a lot of data coming at you. Storing that data is not where the value is; the value is in inferencing what outcome you want to come out of it. So how do you actually do inferencing? We built this with Capgemini as a partner, and we're testing it in the cities of London and Turin in Italy. These are roadside units, connected through 5G and spread around the cities, doing centralized traffic management across multimodal transport: your roads, your railways, your port, all of it coordinated together. There's a control plane which manages the central city, think of it as city-level operations, and then there are individual edge units. Each unit has 5G connectivity, a small edge compute platform, and computer vision all built into it.
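The trigger-based inferencing described above can be sketched in a few lines. This is a minimal illustration, not any Intel or Capgemini API: the model, labels, and threshold below are all hypothetical stand-ins. The point is that every frame is inspected locally, but only high-confidence events leave the roadside unit, so far less data goes upstream than arrives at the camera.

```python
import random
import time

CONF_THRESHOLD = 0.8  # illustrative trigger level

def run_inference(frame):
    """Stand-in for a real computer-vision model on the edge unit."""
    label = random.choice(["car", "ambulance", "pedestrian"])
    return label, random.random()

def process_stream(frames):
    """Filter a raw camera stream down to forwardable events."""
    events = []
    for frame in frames:
        label, confidence = run_inference(frame)
        if confidence >= CONF_THRESHOLD:  # the "trigger"
            events.append({"label": label,
                           "confidence": round(confidence, 2),
                           "ts": time.time()})
    return events

# Most of the 1,000 frames are inspected and discarded on the unit;
# only the triggered events would be sent to the city control plane.
events = process_stream(range(1000))
print(f"{len(events)} events forwarded out of 1000 frames")
```

The same shape applies whether the trigger is a confidence score, a specific label (an ambulance in frame), or a change against the previous frame.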
So we're doing inferencing for those applications, and I'll follow up on specific scenarios in the questions, but traffic management is the start. We're also testing traffic management for emergency services, and I think that's the important part. Ambulance, fire, police: those are the high-stress situations, and the application-level compute is just inferencing, looking at traffic patterns, looking at incidents, and centralizing it. We've used our 5G software stack as well as our AI inferencing stack and run it on a really thin edge platform.

It's an amazing take when you have multiple cities around the world willing to jump on this. Yep. As you can see on the slide, these are the open source projects that have been used, and you can see which Intel products are used. What I find fascinating about traffic management is all the edge devices you need everywhere, from the roads to the utilities to the traffic signals, all of which have to work in cohesion with each other. So with that, we'll go next to Stu, who's going to talk about edge computing in defense and security.

Yeah, thanks, Larry. So the use case we're talking about here is actually the central IT department for the Israel Defense Forces, a group called Mamram. And it's funny, I think back to the first conference I went to that was really talking a lot about containers; it was IDF, which we used to joke, is that the Israel Defense Forces or the Intel Developer Forum? My daughter had a backpack from that one and people would get confused.
But with the Israel Defense Forces, when we talked about containers in the earliest days, it came down to some basic things: I can spin things up faster, I can accelerate my applications, because the unit of deployment is much closer to the application. As it shows on the slide, builds that we used to do with VMs took weeks, and now we can take that down to hours. The other thing, and I think a common theme you're going to hear from all of us, is that AI and ML are great workloads here, because the question is: do I do my processing back at some central IT or the cloud, or can I do something at the edge? I heard a great line recently that to become really efficient at the edge, what we really need is to kill the data at that location. We all know there's an explosion of data at the edge, but I don't really need all of it. So in this case, there are many times they want to process it there. We've got Kubernetes running up on the International Space Station, and obviously if I had to send everything back down to Earth every time I processed something, that could be a challenge in a lot of cases. So some really interesting things. One of the open source projects we list here is Kubeflow, which is the basis for Open Data Hub, a whole lot of data science tooling that this customer and many others are able to take advantage of. So yeah, it's about accelerating and helping their developers. We did a lot of training for them to get more people up and running, because there's that dichotomy: there are new skill sets we need to learn, but at the edge you typically don't have the people resources. That's why we have to have a lot more automation.
It's not about taking people out; it's about having people in the right location to take care of what we need, and typically at the edge you're not going to have that highly trained skill set on site. And the last point on this, of course, is the general message you'll hear from Red Hat: consistency everywhere. Linux lives everywhere, and OpenShift can live everywhere that Linux can. This environment was based on what's known as single-node OpenShift. In a normal Kubernetes architecture you need some number of control-plane nodes and worker nodes, which is a lot of systems. Single-node OpenShift allowed us to boil that down to a single machine running both the control plane and the worker, which helps, of course, if it needs to be disconnected. So this can work in semi-connected or disconnected environments for a time, and when things get reconnected, they can share the data they need. I think we'll have some more questions to dig into later, but that's the overall picture.

Thanks, Stu. So one of the things you're seeing here is the whole ML/AI aspect of edge. As edge has been progressing, I've seen three different areas of innovation helping it along; there are more, but three stand out. First, obviously, ML and AI on the edge, which you can use for image recognition and several other things. The second is 5G and networking: the speed you need to connect with these devices, and obviously all the security that goes with it. And the third is hardware acceleration at the edge, where you have specialized, made-for-purpose chips built for the edge. These three aspects of technology evolution are really driving the evolution of edge in the market.
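The disconnected-operation model described above, journal results locally while the link is down and sync when it returns, can be sketched as a simple store-and-forward pattern. This is a minimal illustration under assumed names, not any Red Hat or OpenShift API; real systems layer durability, retries, and deduplication on top.

```python
import json
import os
import tempfile

class StoreAndForward:
    """Sketch of disconnected edge operation: results are journaled
    locally while the link is down and drained to the central site
    once connectivity returns. Class and file format are illustrative."""

    def __init__(self, journal_path):
        self.journal_path = journal_path

    def record(self, result):
        # Append-only journal, so queued results survive node restarts.
        with open(self.journal_path, "a") as f:
            f.write(json.dumps(result) + "\n")

    def drain(self, send):
        # On reconnect, replay every queued result and clear the journal.
        sent = 0
        if os.path.exists(self.journal_path):
            with open(self.journal_path) as f:
                for line in f:
                    send(json.loads(line))
                    sent += 1
            os.remove(self.journal_path)
        return sent

# Queue two results while "offline", then drain them once "connected".
journal = os.path.join(tempfile.mkdtemp(), "journal.ndjson")
saf = StoreAndForward(journal)
saf.record({"node": "edge-1", "ok": True})
saf.record({"node": "edge-1", "ok": False})
received = []
print(saf.drain(received.append))  # prints 2
```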
In this case you're seeing one of them on stage, but obviously behind the scenes the solution is using the other aspects too. So again, you can see the open source projects and the Red Hat products being used in this solution. And with that, we'll go to Marilyn to talk about the VMware story of a global food provider.

Yeah, hi everyone. So, similar stories: we've got smart cities, we've got defense, and I'll talk a little bit about manufacturing, or food processing. This is a food processing customer in the U.S., and they've got multiple processing plants. For each processing line, they've deployed our edge compute offering. They've got cameras all over the line and they're basically doing inferencing at the edge as well. It's interesting; I didn't think we were all going to talk about inferencing. They're doing inferencing on the line, meaning they've got workers physically taking the meat off the bone, and they're using the cameras and inferencing technology. After the worker has done their job separating the meat from the bone, they put the bone back on the line, and the inferencing is analyzing that in real time. As that bone comes down the line, if the system realizes that maybe the worker didn't get all the meat off the bone, or there's something wrong with it, it can kick it back onto the line to be processed again. And so it does a few different things. One, the company becomes more efficient. And second, they become more sustainable; we know how impactful it is to raise a cow, to raise animals for our food, so it's very important to be more sustainable.
The nice thing, and there are a few advantages here, is that this particular use case has completely paid for itself and enabled other use cases. They can then do predictive maintenance, because they can connect their sensors onto our platform. They can also do more closed-loop automation, using it to control the different arms that kick that bone back onto the line. And they can do training with the workers, because they're monitoring as the workers work, and they've got it for safety if something were to happen to a worker on the job. So having that edge computing platform at each of these processing lines, besides the economic advantages, opens them up to all the other use cases that come from connecting so many discrete systems together. I think the super interesting part about manufacturing is what it brings together. There are still a lot of use cases happening physically. Usually, when businesses have done inferencing at the edge before, they'll do it on a physical box with the cameras, a completely separate system from anything else running in the factory. That's the hardware sprawl they have. With our edge compute platform, we can really bring that together and make it more efficient to run their existing workloads, which in a lot of cases are still VMs, as well as help them modernize. And I'll also say that with a lot of these workloads, they can completely cut out the VM stage and go directly to Kubernetes, which is another great part of it. So it's just interesting how all of the different use cases come together in manufacturing.

Great, thank you. So I'm going to follow up on that.
First of all, here you see another example of image recognition and AI/ML connected with each other. This also relates to the amount of data now processed at the edge, and how much data growth is at the edge versus the cloud: by 2025, data at the edge is expected to be several times what is in the cloud. There are several numbers being thrown around out there, and I'd like some of the panelists to talk about that later. But Marilyn, thinking about this use case you brought up: how much waste did the company reduce? And one of the reasons companies get into these kinds of things is recognition by their customers. Folks are looking at ESG; they want to see how sustainable you are. So did this company get any recognition from their consumers for how they were doing it? Can you talk about these two aspects of the solution?

Yeah. They definitely, of course, could market it as an advantage. I don't think they've yet received any public recognition, but overall it's an important statement for them to make, and I do think it's becoming much more critical around the world, especially with everything that's going on. So it's been a good thing for them. And you wanted to know specifically how much waste they saved? Thank you, that triggered my memory. They've definitely seen significant reductions. I can't share the exact numbers, but it's a double-digit reduction in waste, along with the cost savings. I can't share that exact number either, but this has definitely been an efficient use case that pays for itself. Got it.
So, Stu, talking about the Israel Defense Forces: what do you see over there? What are the new ML/AI processes they adopted, and what value did they get, without obviously giving up any state secrets?

Yeah, there are ones I could tell you about, but then somebody would probably kill us all. No, seriously, it's really important, and it's interesting. We often talk about the dynamic: is it the cloud or is it the edge? This unit actually calls their edge deployments cloudlets, and if you think about it architecturally, it's all connected. The typical model we see with AI is that I do my training in the cloud and my processing at the edge. We've worked with a lot of the auto manufacturers; if you bought a Tesla or some other car, it's not going to create the model in your car, but you want the latest software pushed to you, and you need data processed immediately so that you're not going back to a central location. It's the same thing with this force: it has autonomy, but it is still tied into, and feeding information back to, central IT, and that's a central tenet, constantly relearning and iterating on what they're doing from the field.

Great. So here's a question I want to ask all three of you. I heard this term from Muneyb for the first time a couple weeks ago: edge native. I cover cloud native, and now everybody's talking about edge native. Tell me about edge native. What are companies doing to go edge native, and what business value do you think they'll get from making edge native one of their priorities? I'll start with Muneyb and go this way.

Thank you for attributing that name to me. The reason I called it out, and I think everybody's familiar with what it is:
First, the application attributes are different, what you have to design for. For maybe 30 years we've all been very good at streamlining IT. We had IT workloads and IT workflows well defined in our minds, from virtual machines to containers to Kubernetes. With virtual machines you create blueprints: I know how to put together an e-commerce app, there's a web tier, there's an app tier, there's a database, it's a blueprint, I push it out. Then comes Kubernetes: I have a set of microservices, I can orchestrate them. As an industry we've streamlined these IT workloads and IT workflows, and we're all working to make them more agile and efficient. What we're discovering at the edge are OT workloads and OT workflows. You heard each one of us talk about meat processing, transport and roadside units, defense. The workflows are very different, and therefore the workloads are also very different. The workflow is: is there more meat on the bone? For me, it's: how do we manage traffic? So we need to adapt to the workflows. If you actually peel this back and look at the application model, your compute, network, and storage requirements, power efficiency: you don't care about optimizing your application for power efficiency in a data center or a cloud, because you assume there is unlimited power there. At the edge you can't make that assumption. On an oil rig, your power consumption needs to be super low. So the attributes you have to take into consideration to write an application completely change when you write for the edge. That's the reason to call it out: don't just take a data center or cloud technology and apply it. You'll have to refactor it, retool it, to consider these new attributes and find a better way.
I call it edge native because I have a list of 14 attributes, if you want to follow up, which, by the way, I built at VMware, because I've been looking at this for two years, and now at Intel. Those 14 attributes come from engaging with 100-plus companies globally that are trying to write these edge native applications and discovering that what we've done for 20 or 30 years is not going to cut it, because the parameters we have to design and write to are completely different.

Got it. Marilyn, what about edge native? I know you talked about it yesterday in your edge session.

Yeah, I had my session with Muneyb in the audience, and I can touch on the data aspects of edge native, since Muneyb covered well the attributes he built for us. I think one of the interesting things is the story everyone's telling. When I look at the marketing across all of our different companies here at the edge, we all want to tell the same story: that you can have your applications anywhere, anytime, placed wherever you'd like. But the logistics of it, I think, is what we're really talking about: having a consistent operational and management model across your data center, your public cloud, and your edge. In terms of actually deploying the applications, you want to build them in similar ways, of course, but you have to take into account the different attributes of what it means to be edge native. And something I'd add, I know we're talking about edge native, but how we handle data at the edge just needs to be handled differently. Like Stu said, some of it you want to just kill at the edge. But in the case of these AI/ML models, once you process the inferencing, you might get additional new bits of data that the model didn't handle.
So we have to find efficient ways to send those training bits back to wherever the central model is, which typically will still be in the data center or the public cloud. But also, to do the real-time decision making, you need a stream or a messaging queue to put the data on, so it can be processed and made accessible to the other applications running at the edge. To really tell the complete story, you have to take into account not only the refactoring but how you handle the data at the edge, and, like we said, the networking at the edge. It's a whole new model of how this all comes together. And I'm really excited that over the past year or two, edge native has become a proper term for describing applications at the edge.

Stu, do you want to add anything? Yeah, I actually hadn't heard the term edge native before, but when I think about the general trend, what have we been solving for the last couple of decades? We're really talking about distributed systems. The edge has certain challenges: what's your network connectivity, and Muneyb talked about power and everything like that. From a software standpoint, though, there were certain decisions made when Kubernetes was built, and if you talk about the edge, well, Kubernetes is a little big, and it's built for a lot of environments. Containers are phenomenal at the edge, but all of Kubernetes? How much do I need? And you look at the hundreds of additional services and projects around Kubernetes at this show: how much of that will play at the edge? So it's a question of what's the same, what's different, and optimizing for it. There's a lot of hard engineering work to adjust things.
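The two-way data flow described above, a local queue for real-time consumers plus a feedback channel of uncertain samples back to the central model, can be sketched as follows. The confidence band and names are illustrative assumptions, not any VMware API.

```python
import queue

# Samples the model is "unsure" about are worth sending back for
# retraining; the band itself is an illustrative tuning knob.
UNCERTAIN_BAND = (0.4, 0.6)

local_events = queue.Queue()   # read by other applications on the edge node
training_samples = []          # shipped upstream when bandwidth allows

def handle_prediction(sample_id, label, confidence):
    """Route one inference result two ways: every event goes on the
    local queue for real-time consumers, and low-confidence samples
    are also set aside to improve the central model."""
    local_events.put((sample_id, label, confidence))
    if UNCERTAIN_BAND[0] <= confidence <= UNCERTAIN_BAND[1]:
        training_samples.append(sample_id)

handle_prediction("frame-1", "bone", 0.95)  # confident: local queue only
handle_prediction("frame-2", "bone", 0.52)  # uncertain: also kept for retraining
print(local_events.qsize(), len(training_samples))  # prints: 2 1
```

In production the in-memory queue would be a proper stream or broker, but the routing decision stays the same.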
And so we have multiple products to fit: if I'm doing containers with Linux, we've got solutions there, we've got Kubernetes solutions, and we've got some newer solutions we've been talking about this week.

Great. If you haven't heard the term, just blame me for making it up. Okay, I'm going to ask Muneyb one question. In the example you gave, how much time did the drivers save?

Yeah, it's a great question. We'd all love to save time for work-life balance, but as I said, we're deploying for emergency services. Consider ambulance services: it's been reported that saving five to ten minutes saves lives. So the ten ambulances in the UK have already saved lives because of this. In the police case, the worst-case scenario presented to us was a high-speed car chase, where they're all trying to coordinate among themselves. If you have AI in these units actually watching, instead of inefficiently following a bad driver, the AI can do the traffic management, block the route, and give the police a GPS signal to get ahead of them. So it's coordinating all the services: saving lives, incidents, accidents. I would say the measure of success is really human impact. I know we get excited about the congestion we all live through, but imagine the emergency services trying to get through that congestion. Everywhere we're testing, it has made a human impact.
And we'll talk a little more about this next year; I think we're going to unveil something bigger at Mobile World Congress, and we'll have our CEO Pat talk about all these tech-for-good projects that are saving lives around the world.

Got it. Okay, I'm going to ask one last question and then open it up to the audience. One of the things that comes up with edge is the integration of IT and OT. If I were a CEO or a CISO, I'd say: hey, you're connecting these two, how many people are going to be able to hack into my systems now, the ones running my utilities or manufacturing or whatever? I'd like each of you to bring up some of the things you would say, from a business perspective, to these stakeholders: why would you do edge when it's going to bring IT and OT together? Who wants to go first?

Okay, I'll go. Back when I was an analyst, we used to say that edge exponentially increases your surface area for attack. So that's challenging. And it's interesting: a lot of times we talk about an edge device as one or a few things. One of the things we're looking at, especially in this community, is what happens at scale. We've talked about auto manufacturers and their fleets. We had an announcement this week: we took a project called MicroShift and productized it as Red Hat Device Edge. We have an announcement with Lockheed Martin about their drones. If you have a drone that needs to fly for many hours, adjusting and using AI/ML techniques while it's doing it, obviously they're concerned about vulnerabilities. And they might have thousands of these. How do I manage that fleet?
So it ties into the broader discussion of containers and Kubernetes and what we're doing. We've been looking at solving that in the cloud and in the data center, and now at the edge; it's part of the overall solution. And on IT and OT, one of the biggest things is that in the old OT world, you install some manufacturing device and nobody will ever touch it again. Well, look, in the IT world, a lot of times we install stuff and we don't want to update it either. In the software world, we all know the best way to be secure is to be on the latest version of things, because you'll have all your patches, and you need to do it. So we're getting through some of those IT/OT challenges, but it is a big challenge, and a lot of it, and I'm a networking guy, is layer eight and nine: the people and the politics are where a lot of the challenge is. The technology is helping enable it, and our companies are all trying to help customers move forward and adopt these things.

Got it, thanks. Marilyn, do you want to go? Yeah. I love this question; it's definitely the question we always get as we prepare for announcements and shows and talk to analysts. So, besides the people politics of the IT/OT convergence, from an actual technology perspective, the thing about VMware is that as we look at which products to optimize for the edge, we have a lot of good networking and security products that can help in particular use cases, giving customers a variety of ways to secure and segregate the OT workloads if they need to, if that makes them feel more comfortable.
And that's while still providing all the advantages of virtualization: making the OT workloads easy to deploy and manage. I think the overall advantages you get from virtualization and moving to modern apps outweigh some of the concerns about how to do it. The other nice thing is the flexibility of the edge and the different ways you can deploy it. You don't have to have just one edge in manufacturing; you could have multiple edges of different sizes all working together, so you can logically separate things out by business function if you'd like, or add redundancy. In many cases at the edge we say we want to make things easy, so you shouldn't have to oversize your edge for redundancy, but for critical applications we can make it easy to still be redundant in a compact way.

Yeah. I talked about how your IT and OT workloads and workflows are different; therefore how you secure them is different too. Absolutely right, the attack surface gets huge. There's a management-plane issue of how you distribute security to them. None of these edges necessarily have the large firewalls you would normally have, so you have to tie the root of trust down to the device itself. You have to secure the devices, and you have to apply zero-trust frameworks, because it's truly zero trust: these assets are not sitting behind firewalls and the like in a data center or cloud. That gets you to two concerns. One is scale, and I think Stu pointed to this, because the challenge in management and pushing security policy is the scale.
We're all used to the IT world of public clouds and data center solutions. Given my ten years at VMware, I could probably count on one hand how many customers had more than 100 data centers. Not a lot. And if you look at all the partners, VMware's partners, everybody's partners, the hyperscalers: how many regions or zones do they have? On average 150, maybe 200 max. So the technology has been built to scale to a few hundred locations but hundreds of thousands of workloads. What you're dealing with here is hundreds of thousands of locations and a very small number of workloads. So again, that design framework needs to change for application distribution. Once you distribute the application, you distribute the security posture with it, and that security posture needs to be validated through a root of trust that is almost built into the device, because the device doesn't have a huge firewall in front of it. So how you write the application and how you secure it goes very quickly from the app down to the hardware, with a root of trust buried in the silicon. Unfortunately, there's nothing in the middle, no latitude for you to apply other types of security policy. So yes, the attack surface has grown. How do you propagate a consistent security posture to tens of thousands of devices, and then how do you ground it in a root of trust when you don't have much to work with? It becomes an interesting challenge. But again, as I said, with edge native applications, part of the refactoring is to identify your root of trust, and not somewhere in the cloud, because you may often have disconnected operations. When a device is disconnected, you can't trust a certificate authority in the cloud; the root of trust has to be something on the device.
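The policy-distribution problem described above, validating a pushed security policy against a device-local root of trust rather than an online certificate authority, can be sketched as follows. This is a deliberately simplified illustration: real devices anchor the key in hardware (a TPM or secure element) and use asymmetric signatures, while the symmetric key and function names here are purely hypothetical.

```python
import hashlib
import hmac

# Stand-in for a key provisioned into the device at manufacture;
# on real hardware this would live in a TPM or secure element.
DEVICE_ROOT_KEY = b"provisioned-at-manufacture"

def sign_policy(policy: bytes, key: bytes = DEVICE_ROOT_KEY) -> str:
    """What the management plane would attach to a policy push."""
    return hmac.new(key, policy, hashlib.sha256).hexdigest()

def apply_policy(policy: bytes, signature: str) -> bool:
    """Validate a pushed policy against the device-local root of trust.
    No call out to a cloud CA, so this works while disconnected."""
    expected = sign_policy(policy)
    if not hmac.compare_digest(expected, signature):
        raise ValueError("policy rejected: does not match device root of trust")
    return True

policy = b'{"firewall": "deny-all", "ssh": "disabled"}'
assert apply_policy(policy, sign_policy(policy))  # valid push accepted
```

The point of the sketch is the trust boundary: the verification key never leaves the device, so a tampered policy is rejected even with no firewall or network path to a central authority.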
So again, your design principles change. I have a question on disconnected operations, which I'll save for the end if we don't have enough questions from the audience, but I'd like to open it up to the audience. Do you have any questions? Yeah, please go ahead. Okay, so let me answer the OT question first. OT is operational technology: technology embedded in shop-floor devices, or in a utility company managing water flow. Those are things that are physical in nature, that open and close valves, or start and stop a press on a shop floor. When we put edge solutions in, we are bringing IT and OT together. I'll let Stu repeat the other question and answer it. Sure. If I summarize your question, it's: Kubernetes or not, how do I choose? How do I think about the architecture at the edge? It's actually something we've been debating at this conference for at least four years. There's K3s and k0s and MicroK8s and everything like that. Red Hat participated in a number of those, and we created a project called MicroShift, because, thinking back to what Muneyb talked about, there are real differences in architecture. If I'm deploying in a traditional enterprise data center or in a hyperscaler, that's one thing; the edge has very different needs. Some of the things we get by default there I don't need, and there are other things I do need that I didn't need in those environments. So we spent a bunch of time working in the community on that project, and we've recently productized that piece. But we have a full spectrum. If I just want to start with containers, look, Kubernetes is awesome, but if I just need a container, I don't need an orchestrator for that.
So when do I go from running a handful of containers to something closer to Kubernetes, and when am I doing full Kubernetes? We've spent a lot of time making Kubernetes as small as possible while keeping it operationally the same, so that you have a consistent experience between that edge environment and your other Kubernetes environments, and underneath it all we have a common Linux platform, obviously. And the other two panelists? Yeah, just to add to it: I'll show my age now, I was a Linux kernel contributor back in the '90s. When we wrote it, the Linux kernel was big, and if you wanted a real-time OS you had to carve out a whole heap and throw it away. You can still go out and find real-time OSes of different flavors. What I'm getting at is that what goes into the components will vary by use case. Back to your first question about operational technology: the workflows are not consistent from a meat factory to a defense application, because you're dealing with real-life situations, analog signals that you're trying to convert into digital signals. So what's more important is to understand the modularity and be able to use the right set of modules to get the outcome at the edge. I think all of us are focused on that modularity, and it really is real time, by the way: if a traffic signal doesn't switch from green to red in that fraction of time, there are going to be accidents. The real-time aspect is super critical. So what you pick and choose as the ingredients will be based on the outcome and workflow you're solving for.
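The traffic-signal point is really about deadlines: an edge control step either finishes inside its time budget or the system must fall back to a known-safe state. Here is a small sketch of that idea, with a hypothetical 100 ms budget and stand-in functions for the inference step and the safe default. Note this is soft, after-the-fact deadline checking on a general-purpose OS, not the hard guarantee a true RTOS provides, which is exactly the distinction the panel is drawing.

```python
import time

DEADLINE_MS = 100.0  # assumed control budget; a real RTOS enforces this in the scheduler

def run_with_deadline(step, fallback, deadline_ms=DEADLINE_MS):
    """Run one control step; if it blows the budget, take the safe fallback."""
    start = time.monotonic()
    result = step()
    elapsed_ms = (time.monotonic() - start) * 1000.0
    if elapsed_ms > deadline_ms:
        return fallback()  # e.g. hold the intersection in a known-safe state
    return result

# Hypothetical steps standing in for an inference call and a safe default.
fast_step = lambda: "switch-to-red"
safe_default = lambda: "hold-all-red"
print(run_with_deadline(fast_step, safe_default))  # "switch-to-red"
```

Picking which modules provide the timing guarantee, as the panelist says, depends on the workflow: a vision pipeline can tolerate a dropped frame, a signal controller cannot.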
I was going to comment on what Stu said. The beauty of Kubernetes is that you can mix and match all the different packages for the different areas, and that is its massive advantage. So as we build for the edge, we don't want to be too prescriptive, but we do have to start somewhere, and honestly it's something that keeps me up at night. In fact, I was talking about it earlier this week: how do we pick, how do we decide? Everything we're doing at VMware for our edge optimizations across the different packages, we're of course contributing back. We're listening to our customers to see what's going to be the most useful, and we're basing our decisions on that while staying as flexible as possible. Then the customer can make those design choices based on their use cases, because maybe they'll use a lot of standard packages and one that isn't, and we'll just have to account for the extra space or compute or the different needs. Okay, I do see a big sign saying stop over there, so unfortunately you'll have to ask the panelists your questions after the end of the session, if you don't mind. One last thing on disconnected environments: I suggest all of you look at private 5G and see how it can help edge in a disconnected environment. With that, please give a hand to our panelists. Thank you. Thank you for having us.