Thank you for joining us today for this session, Demystifying the Edge with the New LF Edge Taxonomy and Framework. I'm Molly Wojcik, and I chair the State of the Edge Landscape Working Group as part of the LF Edge Foundation. I'm also the Marketing Director at Section, an edge compute platform provider. And I'm joined today by Vikram Siwach and Jason Shepherd. Gentlemen, would you like to introduce yourselves?

Sure. Hi, everybody. I work as a product manager for MobiledgeX, where I deal with most of the infrastructure in the company. I also chair the Technical Architecture Working Group here, and one of the outcomes of that group's work was this paper which we released. I'm excited to be here today.

Thank you. And I'm Jason Shepherd. I sit on the LF Edge board and have been involved in a variety of different aspects of LF Edge, and I helped work on the white paper with Vikram. I also lead our ecosystem efforts at a company called ZEDEDA, both in the open source realm, working with LF Edge of course, and in the commercial sense.

Excellent. So maybe just to kick us off, for those not as familiar with LF Edge, we can give a high-level overview of the organization. LF Edge is an umbrella organization that aims to establish an open, interoperable framework for edge computing, and it's really about accelerating adoption and the pace of innovation for edge computing through cooperation. That's the basis for this taxonomy work that has been done. So Jason, I'm going to open it up with the big meaty question: how do you define edge computing?

Yeah, that's a good question. It means a lot of different things to a lot of different people; with any kind of emerging trend, there are just a lot of definitions. The old standards joke is: how do we solve the standards problem? We're going to come up with a new standard. And it's the same thing with taxonomy: how do we solve the taxonomy problem? Come up with a new taxonomy. In simple terms, edge computing, as I see it, is moving compute as close as both necessary and feasible to the subscribers that need it. The "necessary" and "feasible" are key. Necessary means I need to move compute closer for reasons of latency, bandwidth, et cetera. Feasible means recognizing the limits: if you're serving autonomous vehicles, people on the move, et cetera, with obviously heavy compute, there's only so far you can go, maybe to the base of a tower. But if you're doing things on, say, a factory floor, you're going to increasingly see that it's feasible to deploy heavier compute on that floor. Those two criteria, I think, cover it, but there are a lot of considerations within that.

Yeah, certainly. And Vikram, given the confusion out there around definitions, tell me about the recent LF Edge taxonomy paper and how it came to be.

Yeah, it's actually based on the diverse conversations we are seeing. As Jason was describing, with edge computing, nearness is the purpose of why we want to distribute this, right? So fundamentally, if you're going to distribute the cloud, whether into the service provider's edge or the end user's edge, each market focuses on its own piece of it. Somebody who is developing an IoT application is only concerned about how to actually make it work on a factory floor, or something like that, so the conversation is siloed there. But if you look at service providers, they will define their network accordingly and say near, far, thick, thin, and different terms will pop up.
The ideology behind edge computing is going to be derived from cloud-native technology; it's not going to be something new, right? So how do we take some of these principles and define some common language around how we can actually describe this scheme of edge? And what can we do to help solve the problem of 14 different standards defining the same thing, in terms of what the market really needs? When you boil down or crystallize that conversation from a market perspective, it becomes much easier, because then you can see commonalities across these models: whether people are using containers or other cloud-native technologies, those principles still apply. And that's how we tried to address it, in a language we all understand, in an easier way, rather than complicating it with more standards. So that was the effort which drove this exercise from our side.

Yeah, and I would add that it is a continuum, as Vikram describes, and a lot of folks are approaching it from the market that they serve, whether they're a service provider or a telco, or someone building edge compute to run on-premises, whether that's a gateway or sensors or whatever. But since it's a continuum, the whole point is to extend these cloud-native principles as far as you can down into the various different edge locations, while also recognizing inherent trade-offs. We've seen a lot of taxonomies out there that say near and far, thick and thin, as Vikram alluded to, and it's confusing because they're ambiguous. What does that mean? They're loaded terms, as I would call them. So the idea behind the white paper we worked on as a community was: how do we base a taxonomy on inherent trade-offs versus loaded terms? And that's kind of how it came to be.

Yeah, that's great. And they're loaded terms and also relative terms, depending on where you lie on that edge continuum, as you were saying.

Yeah, I see people all the time talking about real time, but it's very subjective: real time in building automation is 15 minutes; real time for the airbag in your car is a little different.

Right, it comes down to that latency-sensitive versus latency-critical distinction.

Right. And so can you describe the overall taxonomy in the paper, maybe just at a sort of high level?

Sure. So this is a view from the paper. I highly recommend downloading the paper; if you just type in "LF Edge taxonomy," it should be the first hit. As we've been saying, it's a continuum, and the way we broke it down, and the paper goes into great depth here, is that as you go from centralized data centers to constrained devices, there are a couple of key paradigm shifts in terms of these trade-offs. On the right, we've got massively scalable centralized data centers; there are tens of those that will matter long-term in that sense, the big public clouds and the like. You get to the left side, and there are eventually trillions of constrained devices out in the field. So the scale factor is exponential as you go from right to left. On the right side, we've got centralized services that tend to be shared across many users. And then the key division is the last mile network: that's the point where you switch from a wide area network to a local area network.
And so that's a key delineator when we talk about whether something is latency sensitive or latency critical. Latency critical means things go boom if something bad happens: someone gets hurt, so there's a safety standpoint, or you lose massive amounts of production, so dollars are impacted immediately. For that type of stuff, we're going to see more and more processing moving into the user edge, on-prem. Whereas if it's latency sensitive, where I need to provide a better experience, then you're going to see a lot of aggregation happening, say at the service provider edge: I bring that processing closer, and people can still benefit from the better experience. And I'm sure Vikram can dive in deeper there.

On the user edge, there's one big delineator that we talk about in the paper. The on-prem data center edge uses very similar tools as you would see in the centralized data centers. We're seeing Kubernetes, for example, taking the world by storm, and we're seeing the evolution of Kubernetes moving left: things like K3s, and just this week k0s was announced, so there's a lot of innovation happening with Kubernetes. But they're generally the same tools. The next big inflection point is between the on-prem data center edge and what we've called the smart device edge. Here you're no longer in a physically secure data center, so now you have unique considerations: how do you manage distributed devices, and how do you secure them when people can physically access them? So first it's wide area network versus local area network, then it's the security consideration of being in a physically secure data center or not.

That smart device edge is starting to get into the wild west, where you've got all kinds of different devices and many different form factors. It includes mobile devices, PCs, tablets, mobile phones, et cetera, which are very well-solved ecosystems driven by Windows, Android, iOS, and so on. And then you also have the IoT component, which is very fragmented: gateways, hubs, routers, servers. So you'll see a lot of the projects focused on how we create more of an open foundation for that type of workload. But those devices are still capable of running apps. So think of the smart device edge as spanning devices that, at the lower extreme, have just enough memory to run apps and, at the upper extreme, form a small server cluster deployed outside of a data center. There's a spectrum; all of these elements have a spectrum. And then you get below the smart device edge and you have constrained devices, and the delineation there is that they're so resource-constrained that the stacks tend to be embedded, developed specifically for a given device or a specific type of silicon, so the fragmentation gets even higher. So there are different paradigms. The principles are very similar as you move left, but they're necessarily different paradigms because of these trade-offs.
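To make the continuum Jason just walked through easier to scan, here is a minimal Python sketch that encodes the tiers along with the two delineators he names: the last-mile WAN/LAN divide and physical security. The scale labels follow the discussion above, but the data structure and the helper function are illustrative assumptions, not something from the paper itself.

```python
from dataclasses import dataclass

@dataclass
class EdgeTier:
    name: str
    physically_secure: bool  # inside a locked data center?
    lan_side: bool           # on the local-area side of the last mile?
    scale: str               # rough order of magnitude, per the discussion

# The edge continuum, from centralized cloud (right) to constrained devices (left).
CONTINUUM = [
    EdgeTier("centralized data center",  True,  False, "tens"),
    EdgeTier("service provider edge",    True,  False, "thousands"),
    EdgeTier("on-prem data center edge", True,  True,  "millions"),
    EdgeTier("smart device edge",        False, True,  "billions"),
    EdgeTier("constrained device edge",  False, True,  "trillions"),
]

def candidate_tiers(latency_critical: bool) -> list[EdgeTier]:
    """Latency-critical work (safety, immediate production loss) must stay
    on the LAN side of the last mile; latency-sensitive work can also be
    aggregated upstream, for example at the service provider edge."""
    return [t for t in CONTINUUM if t.lan_side or not latency_critical]

print([t.name for t in candidate_tiers(latency_critical=True)])
```

Under this framing, latency-critical workloads are confined to the LAN side of the last mile, while latency-sensitive workloads can also aggregate upstream, which matches the delineation described above.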
Yeah, so maybe Vikram, given your focus at MobiledgeX, you can tell us a bit more about the service provider edge and key considerations there.

Sure. So if we look at today, service providers have for the last so many years provided a bit pipe, right? I don't like to use that term because it understates their value, but at the end of it, they are the ones carrying the traffic to these end devices which we use so often. And the problem was mostly serving content. If you see the design principle where, let's say, Google or Netflix have this huge amount of content running through these pipes, they developed caching technologies based on the consumption patterns of their end users. So they could figure out: hey, in this region I need a cache, and I probably need to download this content at midnight so that my users can use it. Those principles will hold true in the next generation too, because that's the principle of how you manage limited resources, whether it's caching or the network.

But then there's another trend among service providers which was not visible for quite some time, which is that the interactions between the devices and the back ends are becoming more and more immersive. We talked about downstream, but there's an upstream trend coming in. We talked about latency sensitive, but if you're playing a game or using an augmented reality application, these things become more and more data intensive as well as latency sensitive. And then there are the typical problems of privacy and security, which definitely encroach here: if you are running a robot on your factory floor and it's scanning images, that's highly sensitive data to your company. You probably do not want it exposed into the public domain of a different country. So for things of that nature, where robots and automation are happening, how do you build a service provider edge which is no longer a bit pipe, which can actually play a meaningful role there?

When we look at this in the paper, we identify a typical deployment model which works in harmony with the cloud, because it's not that people will just abandon their working model in the cloud and say, yeah, I'm going to develop for edge. That's not how it would probably work. So in this continuum which Jason described, what we are carrying over is the learnings from the last 10 years of building applications on cloud. Can we import those principles? How a container is built and how it's run is pretty simple; people understand that technology. Can I use the same approach to actually offer them edge? And with those technologies that people are getting more used to as time passes, there needs to be some way for people to first orchestrate these things in a meaningful way, and then to discover them. So the quality-of-experience constructs become even more important: when we say something is near, does it necessarily mean it's better? Probably not. So discovery based on quality of experience, happening at the end device, becomes more fundamental, both for service providers' own edge locations and for offering them as a viable entity to the market. So we discuss a little bit in the paper about how a typical deployment should work, we discuss these design principles, and we talk a little bit about how discovery works and what the APIs would look like.
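Vikram's point that near does not necessarily mean better can be made concrete with a small sketch. This is a hypothetical quality-of-experience-based selection, not MobiledgeX's actual discovery API: the field names, values, and scoring rule are all invented for illustration.

```python
# Hypothetical records a discovery service might return for nearby edge
# sites ("cloudlets"); fields and values are illustrative assumptions.
cloudlets = [
    {"name": "metro-a", "distance_km": 5,  "rtt_ms": 22, "load": 0.9},
    {"name": "metro-b", "distance_km": 40, "rtt_ms": 14, "load": 0.2},
]

def best_cloudlet(sites):
    """Select on measured quality of experience (latency, headroom),
    not raw distance: the nearest site is not necessarily the best."""
    return min(sites, key=lambda s: s["rtt_ms"] * (1 + s["load"]))

print(best_cloudlet(cloudlets)["name"])  # "metro-b", despite being farther away
```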
Towards the end, we also go a little deeper into the mobility principles, because these devices may or may not be static; as we know, we all move around. So how do these things actually work? Today, when I connect to a Netflix cache, it's sitting somewhere on the East Coast, where I live, and I download the movie and watch it. But say I'm moving around and I have an immersive experience going on which is latency sensitive too. How would I move this container around across these geographies where the telco's edges are? Because it will be much more distributed than what a Netflix cache is today. So those things will become more and more material. These new things will evolve, but nothing radical will change: we'll keep learning from the design principles which have been very successful in the past and keep evolving that structure. So that's what we basically discuss in the service provider section. And some of the work we're doing at MobiledgeX is essentially that: we're trying to help boot up that space for meaningful consumption.

Yeah, so that's a good lead into the next question, which is: LF Edge has a number of member projects within it that span this continuum. So Jason, can you maybe help us understand how the projects fit into this continuum?

Yeah, so the focus of the projects tends to lay out across the landscape, and there are currently nine projects within LF Edge. LF Edge, as you described, is about how we harmonize more of a common, open foundation for edge computing, and how we get to where we can make these workloads more transportable across that spectrum. If you look at the projects, Akraino was one of the anchor projects. Akraino is a little different because it's about building blueprints: how do you take both upstream and downstream projects, of course including the ones within LF Edge, and create best practices around deployment, aligned with different horizontal and vertical use cases? So Akraino sort of spans the spectrum. And then you'll also notice there's a variety of projects that tend to align to some of the nuances of, for example, the user edge, and they span from the infrastructure up to the application layer. So there's that common foundation that we're looking to build, and then there are tools around how we enable developers to build applications more effectively with more of that common framework, so they can focus on innovation rather than reinventing the base.

On the infrastructure side, Secure Device Onboard is a new project contributed by Intel. That's all about how do I simplify bootstrapping devices out in the field, and how do I assign owners to devices at the end of a multi-party supply chain? So SDO is all about enabling zero-touch provisioning in a complex supply chain. It's based on what's now a FIDO standard from that security organization. The idea is to provide code that makes it easy to embed into your products and enable secure onboarding within the channel, because if a company ships a product, it's going to go through multiple different hands before it gets to an end user, and making sure that you bind that identity along the way is important.
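To give a feel for the identity binding Jason describes, here is a conceptual sketch of an ownership voucher being extended at each handoff in the supply chain. Real SDO uses proper asymmetric signatures and a defined voucher format; the simple hash chain and key names below are simplifications invented purely for illustration.

```python
import hashlib

def extend_voucher(voucher, next_owner_key):
    """Append the next owner to the chain, binding it to everything
    before it. (A stand-in for SDO's signed voucher extension, not
    the real wire format.)"""
    digest = hashlib.sha256(
        (voucher[-1]["digest"] + next_owner_key).encode()
    ).hexdigest()
    voucher.append({"owner": next_owner_key, "digest": digest})
    return voucher

# The device leaves the manufacturer, then passes through two middlemen
# before the end customer takes ownership.
voucher = [{"owner": "manufacturer-key",
            "digest": hashlib.sha256(b"device-guid-123").hexdigest()}]
for owner in ["distributor-key", "integrator-key", "customer-key"]:
    voucher = extend_voucher(voucher, owner)

# At first boot, the device trusts only the manufacturer's root entry
# and walks the chain to verify the party now claiming ownership.
print(voucher[-1]["owner"])  # "customer-key"
```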
EVE as a project is about how do I build a secure OS for edge computing when I'm not inside of a data center. So that goes to the inflection point between stuff that's in a secure data center and stuff out in the wild. It's focused on IoT edge computing; think of EVE as doing for IoT edge compute nodes what Android did for mobile: creating more of a universal, open OS, with a lot of different features within that.

Open Horizon is about how do I deploy and manage application workloads anywhere, both from a controller standpoint and with an agent that enables that. EVE's got some of that element, but Open Horizon also covers how do I tie that into cloud services. That was initially contributed by IBM.

Home Edge is about packaging all of this up and positioning it in the home, for consumer applications: how do I create something like a home server that delivers different experiences, around keeping video segregated within the home, AI for natural language processing, et cetera, tying all these services together?

And then Fledge and EdgeX are all about the application plane: how do I merge all of this fragmented data from devices out in the field? The further you get to the left on this continuum, the more you get into many, many different protocols for connectivity. There are tens of protocols that matter on the right; there are thousands on the left, once you get into proprietary ones. So both Fledge and EdgeX are focused on that bridge between IoT data and creating more normalized data on the top end, to feed into other services, to feed into the service provider edge (see the normalization sketch below).

All of the projects have a core focus. There is some overlap between the projects, but the whole point of LF Edge is to be inclusive, bring projects in, and seek to harmonize over time: look at where it matters to harmonize and make more common foundations, while recognizing there are also necessary trade-offs. Fledge is very focused on industrial use cases; not to say that EdgeX, which is more general purpose, isn't also focused on industrial, but there are necessary trade-offs. Fledge leans toward high performance and high-rate streaming data, but then you lose some of the portability and some of the flexibility in terms of architecture. EdgeX is focused on high flexibility, with very granular microservices that you can mix and match based on need. So there are just different approaches across the projects, and we definitely think it's important to come at it from different angles and see how things can bridge over time.

The other thing that I would mention, and I know this is near and dear to your heart, is State of the Edge. Maybe it'd be good for you to talk a little bit about that project and the landscape.

Yeah, sure. I actually think we have a slide, yeah, maybe the next one. So as I mentioned at the beginning, I chair the State of the Edge Landscape Working Group, and this is a project that, for anyone familiar with the CNCF landscape, which most people in this audience will be, echoes the efforts there, focused on the edge ecosystem. It's a database-driven framework where we position relevant projects, technologies, and operators within the edge ecosystem, and it's an ongoing effort.
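As flagged above, here is a minimal sketch of the south-to-north normalization idea behind Fledge and EdgeX: many fragmented southbound device protocols in, one common reading envelope out for northbound services. The protocol handlers and the envelope schema are illustrative assumptions, not either project's actual data model.

```python
# Two of the many southbound protocols; real deployments may have dozens.
def from_modbus(raw):
    return {"asset": raw["unit"], "name": raw["register"], "value": raw["value"]}

def from_mqtt(raw):
    parts = raw["topic"].split("/")
    return {"asset": parts[1], "name": parts[-1], "value": raw["payload"]}

READERS = {"modbus": from_modbus, "mqtt": from_mqtt}

def normalize(protocol, raw, timestamp):
    """Wrap any southbound reading in one common envelope so northbound
    services (dashboards, ML pipelines, the service provider edge) see
    uniform data regardless of the device protocol underneath."""
    reading = READERS[protocol](raw)
    reading["timestamp"] = timestamp
    return reading

print(normalize("modbus", {"unit": "boiler-1", "register": "temp_c", "value": 81.2}, 1_700_000_000))
print(normalize("mqtt", {"topic": "plant/line1/rpm", "payload": 1450}, 1_700_000_001))
```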
If you'd like to get involved with the landscape effort, we can provide the resources for that as a follow-up through LF Edge. But yeah, as you said, it is near and dear to my heart, and I would love some more contributions. So maybe moving on. Vikram, based on where you are currently operating and where you see these projects fitting into the continuum, what are you seeing as the major trends in edge computing?

That's a broad question, Molly, but I can answer some of it. What we see coming through are the use cases, and what people are investing money in. From our perspective, a lot of things are now driven by automation structures and ML-driven structures, and you will see that trend continue to grow. It is something natural for people to optimize around, and thanks to the pandemic, those digitization efforts have accelerated even harder because people cannot be together, so people are doing this at a rapid pace. The need to upload all this content and consume it in a closed loop is now a big driver for most of the things we are seeing on the commercial side. And if you package that with 5G and emerging connectivity technologies, it becomes a powerful story: you could offer somebody the privacy they need and the latency sensitivity they have, and take the burden of installing these servers in their facility and move it somewhere nearby instead. Whether it's a Google cache or somebody else's cache is immaterial, but it will save them a lot of headache, because now they have a secure line to connect to those servers, and it's actually providing them the same latency which they would otherwise get from servers in the factory.

But the consumption is driven by the application space; it's truly about which particular use cases hit the market first. And as we see that digitization need and those automation structures, we are seeing ML pipelines becoming more and more robust, and that distribution is becoming robust in this scheme. Also, the gaming industry has come a long way, because people are playing more games now: they're at home, they want immersive experiences, and they're sitting in their living rooms. And that demands something, even on the fixed side: something nearby to serve that uplink content in a way that's reactive. So you see a lot of accelerated devices for these things, the kind of consumer hardware that's very hard to buy, but subscription models are evolving where people just buy it like Netflix content, served out of a Google cache or somebody else's. So we will see more and more of these trends, and more and more investment coming in from the service provider side to take on a bigger slice of that value chain, whether it's gaming or an ML pipeline from an industry. So they'll become part of that value chain system, which today they are not. That's what we are seeing, at least from our perspective. We would love to hear more, because we're not in every business, about what people are seeing on the user side and the device side. And we would like people to tell us what they would like to see in a paper reflecting those trends as well.

So maybe Jason, I'll throw that same question to you. Where do you see the edge heading?

Yeah, so first it's clearing up the confusion.
That's what we're working to do with the taxonomy, of course: get people to think about it more holistically, rather than the kind of singular vision that comes from whichever way they're approaching it. We want to see this transportability of applications, and I think we're going to see more and more of that over time as people come to grips with the different paradigms. The advanced class, to me, is doing the analytics of the analytics, so that based on where I'm running a workload, I can ask: am I getting the best cost-to-performance benefit, or do I need to move it up or down the stack a little bit?

We're also going to see more and more compute happening in constrained devices. Tiny ML is a major trend that's popping up; the conversation around tiny ML has really picked up even over just the past nine months, if you look at what's online. Obviously it's been around longer than that, but this is about how do I embed very, very optimized machine learning models in constrained devices. They tend to be fixed-function. The "Hey Alexa" wake word on your smart speaker is a great example: it's not connected to the internet until it hears that wake word. So we're going to see more there. Again, it's about that continuum. A lot of the IT players come from a background where, when they look at it, all compute is going to happen in the data center on heavy servers and all that. It's like, no, it's actually going to be happening more and more across the spectrum. So I think that's a big trend.

When it comes to AI, and Vikram, you talked about AI and machine learning and some of those trends: anything related to computer vision, we're going to see more and more of it happening at the user edge, on-prem. Of course, when you're in a vehicle or on a mobile device, it inherently needs to hit the service provider edge. But if you have lots of streaming data, whether it's video or vibration analysis, so high-bandwidth vibration data, that's going to want to move closer and closer to the source: you run your models there, do your analytics, and then only push events to the backend (a minimal sketch of this pattern follows below). It's very expensive to move streaming data around if you don't need to. So that gets back to the definition of edge: necessary and feasible. So we're going to see those trends.

The other big thing that I think we're going to see, and 5G is such a broad topic, is this notion of private 5G and CBRS and new bands opening up. I think not only is 5G going to enable new experiences, but we're going to see a lot of business model changes. The ability to build your own networks and drive new experiences around them will change who's doing the work and who's offering that business, beyond the traditional telco. Obviously it's a big opportunity for telcos as well, but it's going to change the landscape of who's delivering networks. And that goes beyond just the IT sense, into people that are used to the industrial world. You're seeing more and more industrial providers starting to either build their own networks or partner with telcos, because the industrial side brings the expertise out in the field.
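And here is the minimal sketch promised above of the analyze-at-the-source pattern: raw high-bandwidth streams stay on-prem, the model runs next to the data, and only sparse events cross the WAN. The model stub, threshold, and event shape are placeholders; a real edge node would run an optimized inference model in their place.

```python
import random

EVENT_THRESHOLD = 0.99  # illustrative confidence cutoff, an assumption

def infer(frame) -> float:
    """Placeholder for a local model (e.g., vision or vibration analysis)
    running on-prem; returns an anomaly/confidence score."""
    return random.random()

def process_stream(frames):
    """Raw frames never leave the site; only sparse events cross the WAN."""
    events = []
    for i, frame in enumerate(frames):
        score = infer(frame)
        if score >= EVENT_THRESHOLD:
            events.append({"frame": i, "score": round(score, 3)})
    return events

# Thousands of frames stay local; only a handful of events go upstream.
events = process_stream(range(10_000))
print(f"{len(events)} events pushed to the backend out of 10000 frames")
```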
How do you optimize a network for a factory floor? Someone that knows how to do traditional networking doesn't necessarily know how to optimize it for process automation or things like that. So in terms of general trends, we're going to see not only workloads getting more distributed, but also more collaboration across different people in the ecosystem, by necessity, because we're seeing this mashup of services and applications. I think that's going to be a key part of it. And it's also a perfect example of why we need more of an open, common foundation, and why you need transparency in that way, because it's just going to get more and more complex. The real long-term play is interconnecting different ecosystems and driving entirely new business models, and that's not going to happen without some sort of open foundation.

Right, right. And I know the community is planning to do a follow-on white paper. How does one provide feedback or get involved in that project? Vikram, you want to take that one?

Sure. So just to finish off what Jason said: as you saw, on the user edge he was also saying there will be a lot of ML and a lot of transactions like that, and there will be privacy-centric moments where the streaming volume is so high that people probably don't want to use the cloud. And that's where I think the beauty of this paper sits, and that's where the collaboration sits. Whether the inference transactions are going to happen on an end device, or the training is going to happen in a private colo, these are the things which will evolve in the next phase. But if you look at the basis of these things, and if we agree on basic terminology and technology for that matter, then if it's going to be a container running an inference engine on the device, it's the same technology as the training container running in the back end somewhere. Then life becomes much simpler, at least for people like us, who can collaborate across somebody supplying an IoT ML use case: how do we take that ML use case and do the useful work which is needed on a service provider edge, rather than trying to reinvent the same things across these verticals?

So just to wrap up what Jason had said: the same drivers which we are seeing in our market, for ML, for tiny ML, or for TensorFlow-style workloads, are, I think, the same drivers Jason is seeing in his market. And if you see that common pattern, all we need is harmonization around it. So the best way for us to work together is participating in these conversations as we write. We don't want to write only our opinions in these papers; we want to collect from the market what people really think we need to write. And a lot of companies are developing solutions around each of these four categories which we highlighted, and the interleaving is happening. So my guess is, as we see the next phase of this paper developing, we would like more participation. There's a Technical Architecture Working Group which everybody can sign up for; it's actually mentioned there. If you go to the link below, you can subscribe to our weekly meetings. We set up an agenda and we deliver a paper in a few months which comprehensively addresses all these things we are talking about.
But now I think we want to move on from addressing the edge continuum and basic terminology to actually solving some of the use cases, and to how we interleave across these layers of the edge continuum we are talking about. That's what we would love to see as an output from this paper. And if somebody has any thoughts or opinions, please join this working group, and we'll be more than happy to work together.

Yeah, that's great. And maybe Jason, any other closing thoughts?

Yeah, I think it's all about getting to community consensus. Obviously we wanted to establish a baseline with this taxonomy first; it kind of frames the conversation. But as Vikram said, the next version of the paper, which we're spinning up now, gets a bit more into the technical side of things: more of the architecture, more on the use case side, and some of the unique trade-offs as you span this continuum. And we're just looking for feedback. So getting on the same page in terms of terminology, along with the work that you're doing in State of the Edge, is important, and we're definitely seeing it improve. The next big thing is of course use cases and practical application. We don't want a bunch of solutions looking for problems. All new market trends tend to start with people talking about the technology: hey, buy my platform. What am I supposed to do with it? Well, I don't know, just buy my platform. You know something is starting to mature when people start getting into real use cases and practical implementations. So that's the goal here: to demystify things even further and get people involved and participating. So I definitely encourage that across the community and the market.

Yeah, certainly. Well, thank you both for the overview and the insights today. I think that wraps up our session for today. Again, if you're looking to get involved with the next phase of the white paper, the link is listed there on that slide. And thanks again to everyone for joining. I hope you have a great day and a great conference.

Thanks, Molly.

Thank you. Thanks, Margaret.