Thank you, Arpit. Good morning, everyone. I was thinking, why don't we play Jeopardy or Family Feud? I tell you what, Heather really set the bar for moderators. So, Heather, wherever you are out there, give all of our answers as questions. That's right. And I'm going to limit you to 60 seconds. What is the edge? Come on. Well, great, let's get into it. So, yeah, what is the edge? It's the Wild West out there, right? From my perspective, there's a lot of whitewashing, a lot of loose terminology, a lot of folks trying to do things on their own, and it's driving fragmentation. Jason, we've spoken about this, and you definitely have some opinions there. Let's start with you and get your insight. Yeah, that's fine. So, when we draw edge-to-cloud architectures, we always put them in landscape mode, because it's easier to fit in a PowerPoint. And literally, the edge is on the left all the time. The West side is the Wild West, and the East side is usually the cloud, more civilized, with the larger, centralized data centers. So, a key point: there's no single edge. And this is part of the confusion. It depends on who you are and how you define it. The way I define edge computing, it's moving compute as close as both necessary and feasible to subscribers or devices. If I'm a telco, I'm going to move compute as close as I need to, to reduce latency for subscribers and reduce bandwidth congestion on my own networks, so that could be my cloud edge, or maybe I push it into CPE equipment on-premise, because I have to. So, that's the other thing: edges are associated with locations, but even that's an organic boundary. If I'm an operations person, I'm necessarily going to move compute on-prem. If I'm a nuclear plant, I'm not going to use the cloud, and stuff like that. So, there's this spectrum of edges.
A lot of the fragmentation is due to inherent complexities. As you go from cloud to device edge, software and hardware always get more customized and complex. So, you can envision this curve where, in the cloud data centers, IT standardization has happened over the years; it's relatively standard, not to say trivial. But as you get further towards the device edge, the Wild West, hardware starts to get more complex first. I need elevated temperature support, then I need ruggedization, then I need all kinds of IO, then I need Class I Division 2 for explosion-proofing, then I need custom hardware for every single connected product on the planet. A car is a little bit different than a toaster. Software behaves slightly differently: it stays flat a little bit longer, then you get into everyone wanting a different flavor of OS, then you get into all the crazy protocols, and then there's a point where you hit this trade-off of memory constraints, and this is what we call the thin compute edge, where you can no longer do virtualization or containerization. You want to extend cloud native principles from the cloud all the way down to that last point, where you then have to go embedded because of memory. And so we'll talk more about this as we go, but there are these inherent trade-offs. Part of the reason why there's historically been so much fragmentation in the edge is because the model for a long time was: I'm going to lock you in with my proprietary protocol. I would argue that IoT and edge are more about the maker movement than anything. The maker movement, Kickstarter, Indiegogo, Raspberry Pi, Arduino, whatever, that ecosystem has driven people to change. I can no longer lock you in, because someone's going to out-innovate me faster. So now it's about how fast you can innovate, and you win by merit, not lock-in. And so in IoT, you've seen all these platforms pop up.
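That idea of a "thin compute edge," the point on the curve below which containerization no longer fits, can be made concrete with a toy placement check. The memory thresholds below are illustrative assumptions, not figures from the panel:

```python
# Toy check for where a device sits relative to the "thin compute edge."
# Both thresholds are assumptions for illustration, not industry-standard numbers.
CONTAINER_RUNTIME_FLOOR_MB = 128   # assumed minimum RAM to host a container runtime
VIRTUALIZATION_FLOOR_MB = 512      # assumed minimum RAM to host a hypervisor

def compute_model(ram_mb: int) -> str:
    """Pick the richest compute model a device's memory can support."""
    if ram_mb >= VIRTUALIZATION_FLOOR_MB:
        return "virtualization"
    if ram_mb >= CONTAINER_RUNTIME_FLOOR_MB:
        return "containers"
    # Below the thin compute edge: no cloud native runtime, go embedded.
    return "embedded"
```

A gateway-class box lands in "containers" or "virtualization"; a deeply constrained sensor node lands in "embedded," which is exactly the boundary where cloud native principles stop stretching.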
Everyone thinks that if I can just lock you in, then I can sell your data, if you let me. The reality is, and this is why we're all collaborating in LF Edge and broader communities, open always wins in the end for scale. Always. And if you want to really scale, you need a multi-cloud strategy that starts with an open edge. And so this is all the stuff that we're working on, and there are so many different things we could talk about. Well, you hit the nail on the head, right? Proprietary, and that's the power of open source, is really opening things up. Melissa, let's get your insight there as well. Well, I think part of what has also been really tremendous is the shift that's happening. Jason talked about the movements that are moving up the stack. I think we've also seen the cloud technologies and the infrastructure software that are well-established, and are becoming more and more productized and implemented in common ways, and I think Dan will probably talk a little bit about this in his keynote as well, coming down the stack. But then you complement that with the diversity of workloads coming up the stack for all of these different use cases. And then you introduce AI and FPGAs and GPUs, and the complexity of what workload needs to go where, physically where, and how much compute I can put at that workload. And then how do I load balance and make sure that I've got the right physical resources with regard to AI, neural networks, GPUs, et cetera? It becomes really, really complex. And I think that's one of the things that I hear from our partners a lot: help me figure this out. I get why open's going to win. But holy heck, it's really complicated. And how does open do that? How does open source make it easier to manage that complexity?
Well, I think part of what you see in the dialogue at sessions like this at ONS and other open source gatherings is a conversation around what the architecture is. And by being open, you get the best of the community's minds, and you are able to leverage the best intelligence about where technology is headed. You're also able to leverage a pool of resources; it's not just me doing my development on my own. And so, if we can agree, and I think it takes a little longer to get to common consensus on what that should be, but once we get to a common consensus, we can go much faster. And then you also have the shared maintenance burden, because everybody's not only contributing to the creation but contributing to the maintenance of it as it moves forward. Excellent, excellent. Eric, your thoughts? Yes, following on to what Melissa said. I think that the horizontal nature, breaking up the vertical integration so that you reduce friction and make it easier to go deploy things, is one of the things that we bring to the table. This is what's been done in the data center; it's how the open source stacks have evolved over time. And the same thing needs to happen on the edge, and it's going to happen on the edge. We just try to accelerate it. But in terms of what is the edge? Well, it seems like, if you read the trade press, it's anything that isn't in the cloud. OK, that's not necessarily a very useful definition. What I'm trying to do is tease apart the edge-unique requirements, and they're different for different parts of the edge, or different edges in the system. So one thing that to me is interesting is this very deep edge or enterprise edge. In many cases, it's things that are deployed onesie-twosie: a server sitting in the ceiling over there managing the AC or whatever, right? Something that does some video analytics, or vibration analytics next to a generator or something.
And how do you actually manage that? It's very different than what people have done with your laptop, and even with your phone, because the assumption is that you don't want to send a human being out there to update the software, because it's too expensive. Because for this thing, you need a ladder, or a crane, to get up there. Or somewhere else, you need to drive for a couple of hours to get there. So what can you actually do from a software perspective to address those unique things, whether it's about physical security or about access to the things? And as Jason said, the different IO requirements you have at that edge, right? Dealing with legacy, serial ports, random radio technologies. It's a very diverse thing, but how can you make that stuff more accessible, so that it's easier to deploy applications out there? And what's exciting, from my perspective, about the LF Edge initiative is unifying these domains and reducing that complexity and that fragmentation when you look at telco, IoT, enterprise, and cloud. And there may be some disagreement here on the panel, but as an analyst, when I look at those domains, I think telco has one of the biggest challenges when you look at the latency, location, and mobility issues there. So, Eric, let's start with you. Are there any best practices or learnings that the telco domain can take from those other domains? Well, I think, building on those horizontal layers, one thing, working up from the hardware, is looking at what is actually common in terms of getting hardware root of trust, being able to have some notion of measured boot, where you can actually boot something, typically Linux, but maybe a real-time OS or something else that's going to run out there, and then build up from there by providing common connectivity for your applications. Cloud-native applications assume that what I get looks like an ethernet.
Well, but you're running on LTE, right? Because you're sitting out there. So does the application need to know about that? Well, it shouldn't have to, right? You should be able to abstract that thing away. So that notion of inserting a layer of virtualization is something that we're working on in Project EVE as part of LF Edge. Jason, your insights into the different projects that are also driving that momentum? Well, I'll start on the telco side. Just like many companies over the past couple of years with IoT and edge, everyone's trying to lock everybody in again, because it's like, if I can just lock you in, then I can make money on your data, if you let me. Historically, that's been the model, but the reality is, it's just like trying to own the internet. So whether you're a telco or otherwise, we look at it as: you must open it up, and then use technology to establish trust between strangers. That's also how you make it scale. So there's a lot of stuff that we're working on, and you'll hear about this notion of inserting trust at a system level so that you can cross between systems of systems. Step one, though, is to solve the insanity around everybody reinventing the middle. With IoT specifically, one of the first projects, a project that I helped get started with a great team at Dell, became a Linux Foundation project: EdgeX Foundry, one of the anchor projects along with Akraino, and there's a bunch of others, EVE and others, coming in. The idea with EdgeX was, hey, let's extend cloud native principles down to the thin compute edge, so to speak; gateways are one way to look at it. Let's do it in a way where you open up an API in the middle, and if you get enough folks using something through collaboration, then that becomes a de facto standard.
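The open-API-in-the-middle pattern described here, protocol-specific connectors southbound, one common event shape northbound, can be sketched roughly like this. This is not the actual EdgeX Foundry API; the connector classes, device names, stand-in readings, and event fields are all invented for illustration:

```python
import json
import time

def normalize(device: str, resource: str, value: float) -> dict:
    """Common event shape: every southbound connector emits this,
    regardless of which wire protocol produced the reading."""
    return {"device": device, "resource": resource,
            "value": value, "origin": time.time()}

class ModbusConnector:
    """Hypothetical connector for one southbound protocol."""
    def read(self) -> dict:
        raw = 215  # stand-in for a Modbus holding-register read (tenths of a degree)
        return normalize("boiler-1", "temperature", raw / 10.0)

class BacnetConnector:
    """A second protocol; emits the exact same northbound shape."""
    def read(self) -> dict:
        raw = {"present-value": 21.5}  # stand-in for a BACnet object read
        return normalize("hvac-3", "temperature", raw["present-value"])

def publish(event: dict) -> str:
    """Northbound side: any consumer (rules engine, cloud export)
    sees one serialized format, never the raw protocol."""
    return json.dumps(event, sort_keys=True)
```

Swapping one connector for another never touches anything north of `normalize`, which is the "democratize the South so you can monetize the North" point in miniature.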
And so you have this open API that allows you to replace components, like device connectors, and I don't care what protocol you speak, because you'll never have one protocol. Southbound especially, there are thousands of protocols in IoT. Why? Because of the lock-in play. Everyone created a protocol so that it was really hard to switch away from their control system. Well, now everyone's moving towards software and services, not lock-in: you need to democratize the South so you can monetize the North. Right. And so, EdgeX. That agility, yeah. Yeah, EdgeX is all about, hey, I don't care what hardware you use or what OS you use. It's platform independent, cloud native, loosely coupled microservices. You pick a protocol, you plug it into the open API, and then everything around it can be heterogeneous. That's what EdgeX was all about, and is all about. And we're seeing a lot of pickup in that community. So good stuff happening there. It gives me that decoupling point the moment data is created, and then I can send data wherever I want. The normal multi-cloud strategy, if I'm a big cloud, is: send me all of your data, and then you can send it anywhere you'd like, if you pay me a lot of money. The EdgeX strategy is: decouple your data the moment it's created, and then any permutation from edge to cloud works. And then you can transport stuff left and right across the chain. The other one I'll mention, we were talking about it earlier this morning, is the glossary project. Very important. You mentioned the terminology; there's a lot of buzzword bingo. Yeah, like fog and foggy. The fog is basically all the edge. Well, fog is foggy, that's why we didn't really subscribe to that term, but fog is basically... I can't see.
The fog is everything that's not in the cloud, same thing: all of the edges, plus the networks in between. So it was really important; there's a project, the Open Glossary of Edge Computing, that's basically about how we get aligned on terminology as an industry. It's semantics to some degree, but here are some examples. First off, it depends on who you are, but in the telco world, it's near edge, far edge. But what's near and what's far? These are loaded terms. Ever seen Sesame Street? Near... far. What do you mean? It's better to talk in absolutes. People say real time way too much. Real time to a building automation person is 15 minutes. Real time to an airbag is a fraction of a second, and it must be deterministic. Real time to a financial person is milliseconds, but no one dies if your trade is late. In a 5G world, it's less than 5 milliseconds of latency. So we need to talk in absolutes. Or tiny: I hear data center people say tiny all the time. Tiny to them is 500 megabytes of footprint. In the world that Eric lives in, it's five kilobytes in some cases. So we have to get aligned on absolutes that are descriptive around inherent trade-offs. There's some general terminology, and then there's: let's not use loaded terms, let's use absolutes. And it matters, because in every conversation I'm in, we spend the first 15 minutes just getting grounded on what we're talking about. Potato, potahto. You're like, get over this, right? I think another lesson learned, an area where LF Edge is investing and it's really important, is this: if we think back to what happened with the cloud transformation, the telcos and cloud operators and tier twos all implemented common open source projects for their cloud infrastructure.
But because they were standing them up as things were being developed, each one of them did it slightly differently. Some running OpenStack, some running Kubernetes on OpenStack, all of them containers, cloud native. And so now all of those different companies are spending enormous resources maintaining what is essentially open source that became proprietary because of the way it was integrated; it was unique to their particular business. And so, taking the lesson learned from that, we're really focused on: hey, if the vision of edge is to be realized while I'm still working, we need to stop the fractured way these open source projects get consumed. And we need to work as a community, not only on building these components, but on the way they get integrated to realize actual use cases. And so there's a project as part of LF Edge called Akraino, and that's exactly what the project is about. It's the community coming together with specific use cases in mind and saying, okay, for this specific use case, I would like to architect this with these components, these assets, integrated in a completely declarative fashion: this hardware, this software, this release of this software, et cetera. This is how you integrate it. This is how you render it. This is how we validated it. A lot of open source projects don't actually validate the functionality that they attest to. They'll have a feature and they release it, but how is it validated, and with what kind of test cases, what kind of bandwidth constraints, et cetera? And so Akraino is really trying to say, hey, let's simplify this. Nobody's going to differentiate on infrastructure software, right? It's all going to be on the app services and business models running on top. How do we accelerate the infrastructure to make this possible, and do so in a way that the whole community benefits, for now and for the long term? And I think that's really, really compelling.
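The declarative, validated-stack idea just described can be sketched as a tiny spec-versus-deployment check. The field names and version strings below are invented for illustration; Akraino's actual blueprint format is its own thing:

```python
# Illustrative declarative stack description: every layer is pinned,
# and the validation that was run is recorded alongside it.
blueprint = {
    "hardware": "x86_64",
    "os": {"name": "ubuntu", "release": "18.04"},
    "k8s": "1.13.5",
    "validated_by": ["conformance-suite", "latency-soak-test"],  # hypothetical suites
}

def validate(deployed: dict, spec: dict) -> list:
    """Return every layer where the deployment drifts from the declared spec."""
    return [k for k in ("hardware", "os", "k8s") if deployed.get(k) != spec[k]]
```

The point of declaring the whole stack up front is exactly that drift becomes detectable: two operators consuming the same blueprint either both pass `validate`, or you can name the layer where they diverged instead of discovering it in production.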
It is a different type of open source project, in the sense that it's not as much a development project. There are some development aspects, but it's really folks coming together to do the dirty work of making this stuff productizable. And I think that's a really tremendous effort. No, I'm not an engineer, but I like to play one on TV. Shouldn't that be the model for all open source projects, to drive adoption and drive scale? I think it should be. And I think we're at the cusp of seeing some of this. I think OpenStack's done some of this. The Cloud Native Computing Foundation is doing some of this. But what I think is differentiated about Akraino is that it's actually pulling from all the upstream projects to render these use cases as complete vertical stacks and reference stacks for the community. And so I think we're at a pioneering kind of step forward in the context of open source. And we'll see if it works. There are definitely skeptics. Awesome, awesome. Well, we've got some time left, so I want to throw a bonus question to the team here. Arpit stood up and dropped the mic when ONS got kicked off, and talked about how the impact of the edge could potentially be 4x that of what we've seen with cloud. Cloud's been pretty well accepted. So for me, that was pretty mind-blowing. I'd love to get each of your insights on how you see that happening. Are there certain linchpins that'll occur that'll really drive that? So, Melissa, maybe start with you. So, I apologize, I wasn't listening to your question. I was thinking about the fact that I failed to mention a couple of things when I was talking. I got totally passionate about Akraino, and I failed to mention that there are a couple of other projects as part of LF Edge, including the ones that Arpit mentioned yesterday. When you said that, I was like, oh, shoot. Go for it. I didn't say anything about those.
Anyway, there are some new projects that were announced as part of LF Edge. Baetyl and Fledge were also added; there are seven projects total as part of LF Edge, and you've heard a couple mentioned here. There's also a laundry list of new projects being contributed to LF Edge, so there's a lot of opportunity for folks to get engaged. And I think the reason there's so much interest from the ecosystem and from the participants in LF Edge and these different new projects is the diversity of use cases. While there is infrastructure software, common components that can be leveraged, some of the challenges that exist in the context of edge technologies are unique; they're different, and they require different software, different capabilities, and different components. So I'd encourage you to check out some of the new projects that were announced. There's a ton of documentation on the LF Edge website and the wiki, wiki.lfedge.org. So it's very easy to get informed; in true open source fashion, everything's open. Please leverage what's available to get more informed about these different projects. Thanks. And then the key, of course, within LF Edge and all Linux Foundation projects, is the governance. That vendor-neutral governance. The whole point of an umbrella project, as probably many people know from the Cloud Native Computing Foundation, LF Networking, and a bunch of others, is that the model's been proven time and time again. It's: be inclusive, bring projects in even if there's some overlap up front, but then let the community, working within that structured governance, the technical advisory committee and all that, help to harmonize these projects over time. A key part of that mission is that nobody wins, as Melissa said so well, by reinventing the middle. Someone told me once, open source is all about minimizing undifferentiated heavy lifting. Okay. This is important. I've been thinking about that one for a while.
Yeah, but to answer your question on the cloud. So there's a lot of stuff. All data is created at the edge, and there's no doubt that there are just more and more devices popping up on networks, and this is going to cause a lot of congestion and a lot of need to shift compute. Is the cloud going away? No. There's a lot of clickbait: oh, the cloud is going to disappear. No. Will the cloud always do the deepest of deep learning? Yeah. Are you going to see more learning happening at the edges, closer to the devices? Yes. I call that shallow learning. The point is that we don't know where things are going to run in the end. We haven't even scratched the surface yet on the business models and the use cases. Get out of just PoC, party of one, and now try to scale that solution. Now try to do a solution that intersects with other domains. That's the real potential, and this is why you need open. You can't cross public and private and all these different folks without trust at a system-wide level. That's the big next conversation. The point is, if you architect properly, even if your right answer on day one is to send all your data to some cloud and do stuff centralized, great, but you're not going to want to stay there, because you're going to get the bill. It's going to be a hybrid model. You're going to want a hybrid model, and so, over time, you need to have that transportability of workloads anywhere along that continuum.
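The trade-off behind "you're going to get the bill" can be sketched as a toy placement heuristic: route a workload to the edge when the cloud physically can't meet the latency budget, or when the data-transfer bill blows past what you'll spend. Every number below is an assumption for illustration, not a quoted price or measurement:

```python
# Toy workload-placement heuristic; thresholds and costs are assumptions.
CLOUD_RTT_MS = 80          # assumed round trip to a regional cloud
EDGE_RTT_MS = 5            # assumed round trip to a local edge node
EGRESS_COST_PER_GB = 0.09  # assumed per-GB cloud transfer pricing

def place(latency_budget_ms: float, gb_per_day: float, max_daily_spend: float) -> str:
    """Decide where a workload should run under a latency budget and a cost cap."""
    if latency_budget_ms < CLOUD_RTT_MS:
        return "edge"   # the cloud round trip alone busts the latency budget
    if gb_per_day * EGRESS_COST_PER_GB > max_daily_spend:
        return "edge"   # ship compute to the data: "you're going to get the bill"
    return "cloud"
```

An all-cloud answer can be right on day one and wrong at scale: the same function flips a workload to the edge as soon as either the latency requirement tightens or the daily data volume grows, which is the workload-transportability argument in miniature.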
And that means you have to architect properly right now, and this is why things like Akraino, and the stuff we're doing with EVE, matter. Home Edge is another project within LF Edge; even in the home, you're going to start seeing your gateway look more like a server, hosting all the different services. And all of these trends: abstract everything you can, compute, storage, networking, and always separate the data from the underlying infrastructure. In the end, it's about consistent infrastructure with domain knowledge applied. The winners will have the best algorithms, the best specialty hardware, the best specialty software. No one wins reinventing the middle. Right. So the short answer is, you're going to see a massive change across the board. The last thing I'll say before I give time to Eric: I get asked all the time, isn't 5G just going to make edge computing a short-lived thing? Right, yeah. Yeah, the latency is very low, but number one, I don't care how many nines you have on reliability, I'm not going to let you deploy my airbag from the cloud. Yeah. Right. Not going to happen. And number two, bandwidth always comes with a cost. Right. The analogy I use is freeways: people think, oh, solve the traffic problem by building freeways. Well, what happens? More people move to town, suburbs grow, and then you have more traffic. Yeah. Everything rises up, and the new experiences are going to be incredible, all this cool stuff, but it's not going to solve the bandwidth problem. Right. And economics drive it as well, licensed versus unlicensed spectrum; we could go all day on that. Eric, why don't you close us out? Any additional thoughts here?
Yes, if you look at where the edge is at, all the people analyzing this stuff are saying that the amount of data generated at the edge is growing, and it's going to grow to overtake the amount of data that actually makes it to the cloud, just in terms of volume. If you look at actually deployed infrastructure, you can even argue that the edge is already here and already bigger than the cloud. It's just not connected yet, right? If you walk into a factory floor, you will find industrial PCs sitting in every machine on the floor. They might be running some app on Windows or whatever, probably an old version of it, and hopefully not connected to the network, because it probably hasn't been patched in a while, right? Not since they installed the machine. But people want to get that data out, right? And how can they get the data out? Well, maybe they deploy a separate industrial PC next to it that runs something more modern, software-wise, not Windows 98 or whatever, and then build it up. But I think the opportunity is there: being able to gather that data, do a bunch of the computing locally, and then say, okay, what do I need to export from here to the cloud? How do I build the system? And making that easy to leverage, from the perspective of the people developing applications as well as the people that want to deploy the network services that can actually connect that stuff together. So I think that's the key thing. Excellent, excellent. Well, Eric, Jason, Melissa, thank you so much. Great insights. Enjoy the rest of the event. Yep, thank you. Thank you. Thank you. All right. Look at that. Thank you.