All right. Good morning, good evening, good afternoon, wherever you are. My name is Jason Shepherd. I'm the VP of Ecosystem at a startup called ZEDEDA. We have a project within LF Edge, under the Linux Foundation umbrella, whose goal is to build the Android of the IoT edge, and I'll talk about that as we go. I'm also a board member for LF Edge, which, like the Cloud Native Computing Foundation, is a collection of complementary projects. Our whole goal as an organization is to harmonize across these open source projects, which have necessarily different focus areas across the edge continuum. I'd also highlight that you should stay tuned for an edge white paper that's coming out soon, which we participated in with the community and which really helps to clarify things. So we'll start with a little preview from that white paper. A lot of people out there are talking about edge computing, and there's some clickbait along the lines of "edge is going to replace the cloud." No, it's a continuum. We're going to see a spectrum of compute across the landscape, from the constrained devices that live in the physical world, where people are, in operations and processes and vehicles and machines, you name it, to centralized data centers in the cloud. This diagram is from the LF Edge white paper, and it really talks through what this continuum looks like. It introduces concepts based on absolute, inherent technical and logistical trade-offs, versus using what I call loaded terms, which is very common in the industry today and is why there's so much confusion about edge computing. Today you've got people saying it's "near edge" and "far edge," a common framing in the telco world, or "thin" and "thick" compute.
So kind of like a gateway versus a server on-prem. And that's fine; we're always better off with very clear terms. Of course, this is just language, and what matters is the technology and the outcomes that we're delivering for customers and enabling through open source collaboration in those foundations. But it helps to get on the same page. So in this continuum, if you look at the right side, you've got the centralized data centers in the cloud, massively scalable. For the big public clouds there are tens, call it hundreds, of those kinds of resource pools. You've got the internet edge, and then you're seeing more and more compute moving left, to be closer to the devices and users that need those resources. We define edge computing as moving compute as close as both necessary and feasible to the resources that need it. In a perfect world, we'd put everything in the cloud and get the benefits of centralization and the scale factor and all that, but there's a variety of reasons we're seeing a need for more and more compute closer to the physical world. You've got regional edge providers offering CDNs, content delivery networks, and even the clouds are moving to the other side of the internet edge, running data centers there to get a hop closer. We're seeing telcos, wireline operators, service providers in general, deploying more compute, literally a modular data center at the bottom of a cell tower, to serve users; online video streaming is an example, so that we reduce the latency and the congestion on the upstream networks. So you see the spectrum, and we'll get into the user edge in a second, but at a top level, the service provider edge and the user edge are the big categories that we have.
And you'll notice that there's a bleed here with this angled line. If I'm a service provider and I run equipment on-prem, in the form of customer premise equipment, CPE, then I bleed into the user edge, and I push that last-mile network boundary even further into the on-prem scenario. And if I'm an enterprise end user and I run a private data center, then I'm basically bleeding into that right side. The important delineator between these top-level buckets is the last-mile network. This is your cell connection, or satellite if you're out at an oil well or a mine somewhere or out in the ocean; it could be wired connectivity, et cetera. The reason why, as a community, we made this the top-level delineation in the taxonomy is that there are certain things you will just never do over that last-mile network, even if your network is super reliable and super fast, say a 5G network. If you are doing latency-critical applications, meaning bad things happen if the message doesn't get there in time or the action isn't taken, like deploying the airbag in your car or applying the brakes, you will never do that over the last-mile network. Anything that's latency-critical will always be done locally. Anything that's merely latency-sensitive, you tend to want to push upstream to take more advantage of the economies of scale at these other edges. The paper that will come out next week goes into great detail on how this breaks down, but the reason I'm running through it is that it makes it really clear where EVE plays and what we're doing.
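To make that latency distinction concrete, here's a tiny illustrative sketch. It is not from the white paper or from EVE; the class names and tier labels are my own shorthand for the taxonomy described above.

```python
# Hypothetical sketch: mapping a workload's latency class to the nearest
# tier of the edge continuum where it can safely run. Names are invented
# for illustration; this is not a real LF Edge API.

LATENCY_CRITICAL = "latency-critical"    # e.g. airbag, braking: never crosses the last mile
LATENCY_SENSITIVE = "latency-sensitive"  # can be pushed upstream for economies of scale
LATENCY_TOLERANT = "latency-tolerant"    # batch analytics, archival

def placement_tier(latency_class: str) -> str:
    """Return the closest acceptable tier for a workload."""
    if latency_class == LATENCY_CRITICAL:
        # Bad things happen if the message is late, so stay on the user
        # edge, on this side of the last-mile network.
        return "user edge (local)"
    if latency_class == LATENCY_SENSITIVE:
        # Can tolerate the last-mile hop; benefits from provider scale.
        return "service provider edge"
    return "centralized cloud"
```

The point of the sketch is simply that the last-mile network is a hard boundary for one class of workloads and a soft trade-off for the rest.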
Once I get to the user edge, now I'm on the other side of the last-mile network, and everything just continues to get more and more complex: more custom software, more choices for protocols and hardware and OS and form factors, unique certifications, everything. It's pretty standardized infrastructure on the right. Not that it's trivial; there's a lot of great innovation, and of course Kubernetes has taken the world by storm on the right and is moving more and more left, including into on-prem data centers. So that first sub-tier under the user edge is data centers that have been on-prem for a long time. Within that context there's moderately scalable compute. You're limited by the footprint, the available real estate, and by power and cooling and all that. It's not going to be as scalable as the nearly infinitely scalable centralized cloud, especially the public clouds, but it's still a good level of compute on this side of the last-mile network, so your latency issues are less of a variable; and of course with 5G there are a lot of great things that are going to happen there. The big difference between this and the other edges, though, is that it's still a physically secure data center. It's locked up; whether it's a traditional data center or a micro data center, these are secure locations. The next tier over is what we've called the smart device edge in this taxonomy. This includes IoT-type devices, gateways or even a server on a factory floor or out on an oil rig somewhere, headless IoT components, and then of course PCs and smartphones and tablets, client devices in general. PCs and smartphones are a very well-established ecosystem; of course, we've got Android and iOS in the smartphone world.
These devices at the smart device edge, whether IoT or client-side, share two characteristics that make it the smart device edge. Number one, they're outside of a physically secure data center. A device could be in a closet, so semi-secure, or just sitting out in the wild: in your pocket in the case of a phone, or an IoT gateway sitting in a building somewhere, in a closet or out in the open. They're physically accessible. So now you have unique security needs that you don't necessarily consider in the data center. You need a zero trust model where you trust nobody, and you build trust relationships with the various entities. The scale factor is getting bigger. You need things that run autonomously, regardless of whether they're connected to the internet. You could have a gateway or some sort of server on an oil rig that loses its connection for days, if not a week. It needs to run autonomously, and then whenever it can phone back home to the central controller, it gets whatever updates it needs. So there are just different needs at this smart device edge. IoT components tend to be upload-centric: I'm getting data from the physical world, doing some preprocessing at this edge, and moving it up for further processing upstream. Client devices, like phones and TVs and PCs, tend to be download-centric: I'm downloading content. Of course, if I'm doing cloud-based gaming, I need low latency either way. So it's outside of a physically secure data center but still capable of running apps and abstraction like virtualization or containerization. Once I get to the lower side of the smart device edge, I get into the constrained device edge.
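As a rough illustration of that upload-centric pattern, here's a minimal sketch, assuming a hypothetical gateway that reduces a window of raw sensor readings to a compact summary before anything crosses the last-mile network. The function and field names are invented; this is not EVE or EdgeX code.

```python
# Illustrative only: an upload-centric IoT gateway preprocesses raw
# readings locally and forwards just a summary upstream, cutting
# last-mile bandwidth. All names here are hypothetical.

from statistics import mean

def summarize_window(readings, threshold):
    """Reduce a window of raw readings to the fields worth uploading."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        # Flag locally so the backend only sees pre-qualified alerts.
        "alerts": [r for r in readings if r > threshold],
    }

# A minute of temperature samples stays on the gateway...
window = [20.1, 20.3, 20.2, 31.7, 20.4]
# ...and a few bytes of summary go upstream.
payload = summarize_window(window, threshold=30.0)
```

The same shape applies whether the upstream hop is an on-prem server or a public cloud; only the summary leaves the device.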
Now, this is where I'm basically out in the wild, microcontroller-based, and so constrained that I can't run apps anymore; I'm doing embedded software. This is over-the-air update tools that are almost custom for every piece of silicon. There are great efforts out there, like what Microsoft is doing with Azure Sphere OS, trying to build more of a stable ecosystem for microcontroller-based devices. You've got folks like Foundries.io doing that in the open source sense, and there's a lot of cool stuff happening, but it's really, really messy here. At the smart device edge and the tiers above it, by contrast, you can create abstraction for a small tax and make things more platform-independent. I know this is a lot of setup, but it's really important stuff, and again, the paper gets into great detail. So think of Project EVE, which we'll switch focus to now, as doing for the IoT edge, as part of that smart device edge, what Android did for the mobile component of that smart device edge. It's very fragmented; you need to be able to support all of these different technologies. Within this project, we want to create a universal abstraction engine for this type of edge compute hardware. I'm not going to say it's just a gateway. It could be a gateway, a server, a hub, a router, any kind of hardware capable of supporting an abstraction layer. And then on top of that, you run apps in the form of VMs and containers. You can think of this spectrum as starting around 256 megs of memory; that's what we aspire to. Below that, you're getting so constrained it's embedded, and devices down there could have kilobytes of memory, for sure. So: 256 megs of memory on a single node, give or take, maybe 512, up to a small server cluster.
Now, Kubernetes is coming down this way from the right, and there's great work happening with K3s out in the market. As a project, we started with the sweet spot, the IoT component of the smart device edge, with Project EVE, and we're bridging up to the Kubernetes space. Very important, too: don't assume Kubernetes is just going to get dropped, carte blanche, onto these more constrained devices outside of a physical data center. What we'll see is a subset of the key capabilities that make it attractive, like clustering, but you're not necessarily going to see a one-for-one relationship. There are just different needs across this continuum, and it's about finding the right balance. So again, check out the white paper. It's a really, really good read, and it breaks this down into inherent trade-offs that you can't change, versus loaded terms. I joke that "near" and "far" are being used a lot; that's fine, and we talk about that and map those terms. But if you've ever seen Sesame Street, with Grover doing "near" and "far," you'll get why I joke that these loaded terms are just confusing. So anyway, that's where we play: the IoT component of the smart device edge, doing for IoT what Android did for mobile. Now, a big part of LF Edge, this community of projects, and I think we're up to nine projects now, is all about how you create various levels of abstraction within the landscape. Obviously open source in general is about a shared technology investment. We've partnered with the Linux Foundation for a long time on various things; we helped start a project called EdgeX Foundry with a team there, and it's also now within LF Edge.
It just hit five million downloads, and we started with a blank sheet of paper in 2015. That's just an example of the power of open source. Of course, Kubernetes has taken the world by storm; there's just super awesome stuff there, and we look forward to bridging EVE to that story with the right considerations. LF Edge as a whole is a collection of projects that are all about shared technology investment: minimizing, for end users, developers, et cetera, the undifferentiated heavy lifting that's happening out in the market today, so we can all focus on value; driving interoperability; driving these various layers of abstraction. I'll talk about how this is important to get to the real potential. Just because you can build stuff on your own doesn't mean you should. You need to be part of a bigger open ecosystem, and you need to have choice in this market. So these are principles for IoT edge scale. First: abstract all these different layers of technology and services and smarts, your domain knowledge, from the underlying infrastructure. I always joke, when's the last time your ERP system managed your PCs? You don't do that. But in the IoT wave that ran from 2014-ish until now, and continues, and now we're talking edge and of course AI and 5G and all these different tool sets, it's about customer outcomes first and foremost. What we saw in the market, and I've been looking at this for five years, is that in any new market the initial traction goes to people who built a platform of some sort and attached it to some sort of domain knowledge: I know more than you'd ever need to know about cold chain, grocery store monitoring, or monitoring wellheads out in the wild, or whatever.
And they built technology for the sake of just getting to the data and providing an outcome for a customer, but they didn't really need to. What they need is domain knowledge and their necessarily unique hardware and software, applied to consistent infrastructure. So as things evolve in IoT and edge and digital transformation in general, what we need is a roster of pure-play solutions that plug into more of a consistent infrastructure. Imagine if one company owned the internet; it probably wouldn't have worked out the same as the massive innovation that's happened since the internet really came alive. So you want tools that run on top of consistent infrastructure, and of course EVE is about providing that for the IoT edge, as explained on the previous slide. In the end, the winners will be those that leverage shared technology investment for the base. Open source has rapidly become, I'd say, the way to drive de facto standards in the industry, compared to traditional standards bodies, along with the interoperability and all the other things LF Edge is working on. The winners will be the people with the best services, the best algorithms, the best necessarily unique hardware and apps, the domain knowledge, not the hundredth person this week to reinvent the middle. That is not valuable. So this is a big part of LF Edge. The second key principle is to untether your data: don't have a hard coupling between your data and any backend service (the slide says cloud services here, but it could just as well be an on-prem system). Take control of your data the moment it's created in the physical world, and all data pretty much originates from people or devices in the physical world. Decouple it from any given backend service the moment it's created; this is the point of EdgeX Foundry, and of course there's Fledge within LF Edge and a lot of other projects focused on this.
EVE is about that rule number one, abstracting infrastructure from the apps. If you make that decoupling the moment the data is created, then all permutations along the continuum from edge to cloud work. You can choose to pump all your data to one public cloud; great, they're doing awesome stuff. If that's not right for you, or when you inevitably need a multi-tenant edge where you send data to multiple clouds from your data source, that works too. Maybe you have different parts suppliers serving the same factory floor, or a service provider coming in to work with you on building automation. All permutations work, and you don't get locked in at that point. So with some of the projects within LF Edge, we're creating both interoperability for this and the abstraction to prevent lock-in. I liken it to email: don't use the email address from your internet provider; use something abstracted, like Gmail or any kind of abstracted email, so you feel like you can switch internet providers if you need to. That's decoupling the edge from the cloud. The first principle was decoupling layers in the stack; this one is decoupling the edge from any given cloud, so that all permutations work. The last one is this notion of architecting for cloud native. Cloud native is about how you build and deploy software, not where it runs; cloud native doesn't mean it only happens in the cloud. We talk about this in the paper from the LF Edge community, again, look for it next week, but with the taxonomy, you want to extend these principles: platform-independent software that you can run on any hardware, whether it's Arm or x86 based, across the various capabilities in that spectrum. That's what EVE is about, creating that abstraction.
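A minimal sketch of that "untether your data" principle, under my own assumptions: the producer binds to an abstract sink interface rather than to any one cloud, so destinations can be added or swapped without touching the data source. The class names are hypothetical and not from any LF Edge project.

```python
# Illustrative only: decoupling the edge data source from any given
# backend, so a multi-tenant edge can fan the same record out to
# multiple destinations. All names are invented for this sketch.

class Sink:
    """Any backend: a public cloud, an on-prem system, a second tenant."""
    def __init__(self, name):
        self.name = name
        self.received = []

    def send(self, record):
        self.received.append(record)

class EdgeRouter:
    """Fans each record out to every registered sink."""
    def __init__(self):
        self.sinks = []

    def register(self, sink):
        self.sinks.append(sink)

    def publish(self, record):
        for sink in self.sinks:
            sink.send(record)

router = EdgeRouter()
cloud_a, on_prem = Sink("cloud-a"), Sink("on-prem")
router.register(cloud_a)
router.register(on_prem)
router.publish({"sensor": "wellhead-7", "psi": 1450})
```

Because the producer only ever sees the router, switching or adding backends never touches the code that creates the data, which is the lock-in point the talk is warning about.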
You want to plan on continuous integration and delivery, CI/CD, while recognizing the technical trade-offs I elaborated on with that first slide: more and more constrained hardware the closer you get to the physical world, plus time-critical applications. If you're time-critical and you need a real-time operating system and it's embedded, you're not necessarily going to be running modularized apps on top, but you could be running next to something that is, as you consolidate on common infrastructure. We're seeing a trend there where soft PLCs, programmable logic controllers, run next to analytics apps, and that's going to transform over time. You'll see this sort of physical-to-virtual split between the operations technology environments, which have been running for a long time but not connected to the back end, and of course the IT services that do things around the business and keep things running from a data security standpoint. There is a definite evolution happening today between OT and IT, operations technology and information technology. It's not to say either group does only one thing versus the other, but there are different needs. If there's an issue from a security standpoint, or just downtime in general, there's immediate loss of production in the OT world, and it could even be a safety issue. Meanwhile, in the IT world, if there's a loss or a hack, you can, generally speaking, just shut down the IT network, and sorry, you can't get to your email or whatever. So you have to recognize these trade-offs. Many OT folks have been running factories, or any kind of process in the physical world, for a long time, and the catch-22 of industrial IoT is to connect those systems in a very segmented way.
You're not going to connect your controllers right up to the cloud. You'll have layers of networking on top, and you punch through those different systems to get data to back-end solutions, whether they're on-prem or in the cloud. You also recognize that someone might not be ready for continuous delivery of software to the factory floor, above those control systems, today; but if you architect for it now, you're ready when you have to be, as people start to innovate around you. So we think it's really important to invest in solutions that leverage these principles across the board. And again, LF Edge, as an open source umbrella organization within the Linux Foundation, is driving towards this; we as a private company, and other companies in the landscape, are working on these types of things, and we think it's super important to have this open ecosystem. So that's just the basics: to get started, don't get locked in, get on the path towards something more open. But where does this head over time? The market has a crazy number of platforms, and everyone's trying to build these silos. And great, we want to start small; let's not get crazy here. But long-term, the real potential in digital transformation is this notion of interconnecting ecosystems. I've been blogging about this for a long time. Of course I believe in open; I've been working on EdgeX Foundry with a bunch of great people in the open source community for a long time, and part of the reason I joined ZEDEDA is this notion of another layer of abstraction below something like an EdgeX or a Fledge.
If you want to get to the real potential, and this is B-to-X, whether it's B-to-B-to-C or B-to-B or whatever, across heterogeneous ecosystems, I'm connecting between, say, the supply chain in a grocery store and the end user in their house. I want to bring insurance services into the home, like Snapshot for the home, like they do for your car. I want to enable retailers to come into the home, to get on a level playing field across the board. In any of these things, across the supply chain, I have multiple stakeholders. If you really want to scale this, you have to have an open foundation across all of these different forums, and for that you must have this kind of openness and trust factor. So LF Edge is building this. There are some new efforts, like the Trust over IP Foundation within the Linux Foundation, and I helped pre-launch a project called Alvarium where we're going to be working on this stuff. This is about creating confidence in data as it flows across heterogeneous networks, so you can really develop new business models over time. Granted, let's start with a small use case, whether it's IoT or otherwise, apply some AI model or whatever, get started. But don't forget about the real potential, because when you're cutting costs, there's a lower limit to how far you can go; when you're making new money across all of these different domains, the sky is the limit. So it's super important to be thinking today about how you architect. And again, like I said, imagine if just a couple of companies, if not one, owned the internet; it would not work. We need to democratize it and build trust in through collaboration. And for trust, of course, there's root of trust at the hardware level, which EVE builds in, and we've got a bunch of different capabilities that we'll talk about.
But then you've also got things like ledger technologies, immutable storage, confidential computing; there are all kinds of trust insertion technologies that you can use, and that's what Project Alvarium and some other initiatives at the Trust over IP Foundation are about vetting. It's about industry collaboration on that base, building that trust foundation, and then let's go figure out how we make money around those interconnected ecosystems. So it's super important to be thinking about this, even if you want to start small. I joke that the holy grail of digital is selling stuff to strangers; I've written about this at length online. We've got to scale the grail. "Start small and scale the grail" is one of my taglines. So it's very important to think long term. Okay, I've already explained a bit what EVE is, but again, think of Project EVE as doing for that IoT edge what Android did for mobile. We want to make it easy to orchestrate these devices, and we provide a security layer that's bare metal, that lowest layer. This is all about distributed hardware out in the field. It's outside of a physically secured data center; I don't necessarily own the network; I can't trust that I have a firewall. So I need a zero trust model. I have a distributed firewall capability where I can say this app can only talk to that cloud, and this app can only talk to that app on that box, et cetera. I need to support both virtual machines and containers, and we're exploring stuff with unikernels, but there's lots of legacy stuff out there still running on older flavors of Windows, maybe a SCADA system or a point-of-sale app in a retail store, and I need to be able to run that stuff. This is not about dumping all your existing capital expenditures. And for the modern stuff, it's not like you wouldn't use VMs too, but many things now are in containers, and of course Kubernetes has taken the world by storm.
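To show the shape of that distributed-firewall idea, here's a toy sketch of a zero-trust allow-list: nothing talks to anything unless an explicit rule says this app may reach that endpoint. The rule format and names are invented for illustration; this is not EVE's actual policy schema.

```python
# Hedged sketch of a zero-trust, default-deny firewall policy like the
# one described above. Rules are (source app, destination) pairs; the
# identifiers below are hypothetical examples.

ALLOW_RULES = {
    ("scada-vm", "historian.example.com"),    # legacy app -> one backend only
    ("analytics-ctr", "cloud-a.example.com"),
    ("analytics-ctr", "scada-vm"),            # app-to-app on the same box
}

def is_allowed(src_app, dst, rules=ALLOW_RULES):
    """Default deny: only explicitly allowed (src, dst) pairs pass."""
    return (src_app, dst) in rules

# The SCADA VM may reach its historian, but nothing else, and the
# analytics container may reach one cloud plus the SCADA VM locally.
```

The key design point is default deny: trust is granted per pair, never assumed from network position, which matters when you don't own the network or the perimeter.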
So you need to be able to run existing apps in VMs, you need to run containers, you need those security elements, which we'll talk through in more detail, and you also need, for all the reasons I've been talking about, complete choice over the hardware that you run on, the applications, the clouds and backends, all the abstraction layers. EVE is that lower-level abstraction layer; then you choose the applications you want to run on top, the clouds you want to connect to, and the hardware you run on, whether it's Arm or x86 based. Ultimately EVE, much like an EdgeX, is really about those open APIs that are shaped through open source collaboration, and those APIs work with the controller of your choice. At ZEDEDA we have our own controller, and we welcome anybody to go build one; there's a simple open source controller in the project, and we're actually collaborating with some other projects within LF Edge that are somewhat competitive on the back end, because we're all better off if we focus on creating that universal foundation that helps abstract the complexity and prevent vendor lock-in. There's a page at the project site, just search for Project EVE and LF Edge and you'll find it, that lists the hardware we're supporting; we're working with a bunch of different hardware providers out there, and as more controllers get added, they'll be listed there too. But the point is that the open API between whatever controller you choose, whether it's on-prem or in the cloud, and Project EVE, the lower-level abstraction layer sitting on top of whatever hardware you choose, is the insurance policy that you are not going to get locked into a particular controller. And in parallel to that orchestration plane, you choose the apps and clouds that you want.
With EVE, from the project standpoint and for us as a company, we believe it's very important not to try to be in the data path. You're in control of your data, and of course you want to abstract it as close to the edge as possible, so all permutations work. So that's what Project EVE is: again, doing for IoT what Android did for mobile. Layers of abstraction, support for VMs and containers, and making it very easy to deploy. As you get outside of data centers, you have fewer IT skill sets, people used to working with servers and virtualization and all that, so you need to make it very easy to onboard devices. Literally with EVE, you could install it on a box, ship the box to a location along with some sensors, and the person who installs it connects it to the network and scans a QR code, and it logs itself onto the controller of your choice. EVE itself doesn't do some of that value-add; we think that's where the community comes into play, the value-add of the apps and the zero-touch easy button, but EVE has the hooks to enable that stuff, which is super important when you have a variety of skill sets out in the field doing field repair, upgrades, and deploying new services. Arm, x86, and various co-processors are supported, so you can use GPUs, FPGAs, TPUs; very important here, because there are a lot of tools out there coming from various providers, and you want something with a universal story so that you're not locked into any given choice. And again, I mentioned that we're striving to run, with a meaningful app on top, down to a single node with 256 megs of memory. We're comfortable at a gig today, maybe 512, and it of course depends on what you're running on top. The Raspberry Pi 4 now comes with eight gigs of memory, so things are changing.
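The zero-touch onboarding flow just described can be reduced to a few lines. This is illustrative only: the QR payload, the `Controller` class, and the registration behavior are assumptions for the sketch, not EVE's real onboarding protocol, which involves hardware root-of-trust verification.

```python
# Illustrative sketch of zero-touch onboarding: an installer scans a
# QR code encoding the device identity; the device is then claimed by
# the controller. All names and the payload format are hypothetical.

import json

class Controller:
    def __init__(self):
        self.devices = {}

    def onboard(self, serial, pubkey):
        # A real deployment would verify the device's hardware root of
        # trust before claiming it; here we just record the claim.
        self.devices[serial] = {"pubkey": pubkey, "state": "onboarded"}
        return True

def scan_qr(qr_payload, controller):
    """Installer scans the QR code; the device registers itself."""
    info = json.loads(qr_payload)
    return controller.onboard(info["serial"], info["pubkey"])

ctrl = Controller()
qr = json.dumps({"serial": "EVE-00042", "pubkey": "mock-key"})
ok = scan_qr(qr, ctrl)
```

The point of the hook is that the person in the field needs no IT skills: identity travels with the box, and the trust decision happens at the controller.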
We know some people want to run with less memory, but there's a tax you pay for the abstraction, which is super valuable because it simplifies things. And of course, we need the concept of failover. There's a blue and a green partition, so as you're updating something, the device keeps running autonomously on the current setup, and only after the new setup has proven itself for a certain amount of time does it switch over; there are also multiple copies of EVE on a given box. It's all about uptime, which is super important in the OT world. Really, this IoT edge needs a standardized foundation for orchestration, like Android but in the open source sense: flexible, open, and agnostic. One stack that takes into account all of those challenges we talked about at length on that first slide, but done for the IoT edge. That's what EVE is about, and we've been seeing good growth in the community. So, just to make sure the architectural approaches are clear: take EVE on the left, and we'll compare it to various other choices. First, proprietary bare metal. There are a number of good solutions out there, and when it's bare metal you get security benefits because you're tightly coupled to the hardware: you can tie into the root of trust, into measured boot and attestation, ground-up security functions. You're sitting between the hardware and the OSes and the apps, whether they're VMs or containerized, so you can literally turn ports on and off and do distributed-firewall-type stuff. So that's great, but the proprietary versions cause lock-in to that company's controller. If that's fine for you, cool; but in this world, for all the reasons mentioned before, it's really important to have those abstraction points.
Most solutions out there, commercial or even open source, tend to fall into the camp of supporting only containers or only VMs. Generally there's much more container focus in the open source world these days, with Kubernetes and beyond, not just Docker and the like. And then there are proprietary data center solutions — great offerings, but just not built for the IoT edge: the constrained nature of it, the fact that I don't have physical security, I don't have network security, I don't have a fat pipe. Most IT data center solutions presuppose a constant connection to your controller. At the IoT edge — and even more so at the smart device edge and the constrained device edge — you quite likely do not have a constant connection. So you need what we call an eventual consistency model. You set the desired state in the controller, whether it's on-prem or in the back end; the EVE-enabled device phones home and asks, "What am I supposed to have? What versions of everything?" It works to get up to date, does the update in a separate partition, and only switches over once everything has been verified. If the update doesn't succeed, it keeps running the way it was, because you have to focus on uptime. So there are some unique requirements at the IoT edge, and of course I think openness really matters.

Then there are agent-based solutions, at the bottom right — another approach we commonly see in the market. An agent connects to a controller through APIs, which may be open if it's open source or closed if it's proprietary; it just depends. That agent then communicates through the OS down to the hardware.
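The eventual consistency model just described — device phones home, fetches desired state, reconciles toward it, and simply keeps running when disconnected — can be sketched as a single reconciliation pass. The function and data shapes here are illustrative assumptions, not EVE's real protocol.

```python
# Hedged sketch of the eventual-consistency loop: desired state lives in
# the controller; the device converges toward it when it can reach the
# controller, and keeps its last known-good state when it cannot.

def reconcile_once(fetch_desired, current: dict) -> dict:
    """One pass. fetch_desired raises ConnectionError when offline."""
    try:
        desired = fetch_desired()       # "What am I supposed to have?"
    except ConnectionError:
        return current                  # offline: keep running as-is
    for name, version in desired.items():
        if current.get(name) != version:
            current[name] = version     # stand-in for download/deploy
    for name in list(current):
        if name not in desired:
            del current[name]           # drop workloads no longer wanted
    return current

state = {"app-a": "1.0"}
state = reconcile_once(lambda: {"app-a": "1.1", "app-b": "2.0"}, state)
print(state)  # {'app-a': '1.1', 'app-b': '2.0'}

def offline():
    raise ConnectionError("no pipe to the controller")

state = reconcile_once(offline, state)  # unchanged while disconnected
print(state)  # {'app-a': '1.1', 'app-b': '2.0'}
```

Contrast this with the data-center assumption of a constant controller connection: here losing the link is a normal condition, not a failure mode.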
The challenge is that if you have an agent without deep integration into — and hardening of — that OS, you may have security gaps. You don't get the benefits of a bare metal foundation, like EVE or even the proprietary ones provide, in terms of security and networking functions. And without that integration between the agent and the OS, you could easily break the device out in the field. That's bad news if you're in the middle of nowhere: rolling a truck to fix that device can cost $1,000. We have commercial customers doing wind turbines — getting a helicopter out there for the day, with all the associated logistics, can run upwards of $100,000. So you do not want to break devices in the field. These agent-based solutions also tend to support only containers; and while there are abstraction-layer solutions out there that support VMs, the proprietary ones lock people in. Now, an agent-based solution could actually run on EVE by leveraging the APIs EVE exposes — and if you do tight integration between that agent and the OS, well, guess what: you've just built Project EVE. That's why we think this is the best blend for enabling edge computing at the IoT edge: a completely open API, with vendor-neutral governance through the Linux Foundation. We think that's important — there's open source all over the place, but without neutral governance you can't build trust, because you lack the necessary transparency. Again, we support both containers and VMs, you can't break the device during updates because EVE sits at that lowest level, and it's optimized for constrained hardware outside of a physically secured data center — gateways, servers, or otherwise. That's why we think, as a project community, we're building the right approach.
That's not to say there aren't other good solutions out there, but we think this is the way to go — do for the IoT edge what Android did for mobile, as I keep mentioning. In terms of use cases, there are a variety. These are the more horizontal ones; it's applicable in many markets — manufacturing, industrial, oil and gas, energy, retail, healthcare, you name it — and we've seen folks from all walks of life working with it.

First, workload consolidation. I take a compute resource — a gateway, a server, a small cluster as mentioned, but call it an IoT gateway. I run EVE on top, ship it out, provision it, with my choice of controller in the back end or on-prem. And I want to lift and shift legacy applications. In a manufacturing world, I might have a legacy Windows application that has to run in a VM. I don't have to reinvent it; I can put it right next to modern workloads — maybe one of the clouds' edge solutions, or EdgeX Foundry connected to my choice of back end, or Fledge, or other projects within LF Edge — while pushing AI models down. It's a hybrid workload. That's one use case: it helps you reuse existing investments while investing in modern cloud-native apps, with EVE as the common base.

We've also seen a number of people use EVE in another model where it's less about running apps and more about secure data access: literally using EVE as the base for an appliance that acts as a secure network proxy, pulling data out of machines or wind turbines in the wild and pumping it wherever it needs to go. As a project, we're just that lower-level abstraction, and you use your choice of controller.
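The workload consolidation pattern — a legacy Windows VM side by side with cloud-native containers on one EVE-managed box — could be expressed as a declarative node manifest. The schema below is invented purely for illustration; EVE's real object model differs.

```python
# Hypothetical node manifest (illustrative schema, not EVE's real API):
# one gateway hosting a lift-and-shift legacy VM next to modern containers.

node_manifest = {
    "node": "plant-floor-gw-01",
    "workloads": [
        {   # legacy app, unchanged, lifted into a VM
            "name": "scada-hmi",
            "kind": "vm",
            "image": "windows-scada-legacy.qcow2",
            "memory_mb": 2048,
        },
        {   # cloud-native data collector alongside it, as a container
            "name": "edgex-core",
            "kind": "container",
            "image": "edgexfoundry/core-data:latest",
            "memory_mb": 256,
        },
        {   # AI model pushed down from the back end
            "name": "predictive-maintenance",
            "kind": "container",
            "image": "registry.example.com/pm-model:3",
            "memory_mb": 512,
        },
    ],
}

# Simple capacity sanity check an operator might run before shipping.
total_mb = sum(w["memory_mb"] for w in node_manifest["workloads"])
print(f"{total_mb} MB committed on {node_manifest['node']}")
```

The value of the pattern is that the VM and the containers share one orchestration plane and one update mechanism, rather than two parallel management stacks.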
What's not shown here is the back end — but again, EVE is not in the data path. We're not trying to route the data to our controller, open source or otherwise; it's about orchestration, and you choose where the data goes.

The last use case is edge security analytics. This is similar to the first one, but I'm not in-band with the data — I'm out of band. Maybe I have a box on a SPAN port, sniffing traffic, running an intrusion detection system or some sort of active, ML-based threat analytics. I'm monitoring traffic and triggering events when something fishy is going on, but I'm not in the data path. Again, EVE plus whatever controller you use provides that universal abstraction layer. This is about gaining visibility into what's happening on your network. So we see all kinds of use cases across different markets, but these are the deployment patterns we see.

We've got all these features in there — I've been mentioning a lot of them. There's the root-of-trust element tied to the TPM (and if you don't have a TPM available, EVE can virtualize one), the distributed firewall capability, and the open APIs for managing edge compute objects, whether they're containers or VMs. There's lots of detail online about the project — you're welcome to go to the wiki, dig in, download the code, and go to town. The roadmap is also online, but the name of the game for the project as a community — and we run regular TSC meetings; there's a link at the end of this — is to continue building out that framework. We always strive to reduce the footprint.
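The distributed firewall capability mentioned above amounts to a per-workload allow-list enforced beneath the guest OS, with everything not explicitly allowed dropped by default. Here is a minimal sketch of that evaluation logic; the rule schema and names are invented for illustration.

```python
# Illustrative per-workload distributed firewall sketch (invented schema):
# each VM or container gets its own allow-list of flows, enforced by the
# layer under the guest OS, so a compromised guest cannot widen its reach.

RULES = {
    "imaging-analytics": [   # workload name -> allowed egress flows
        {"proto": "tcp", "dst": "10.0.5.20", "port": 443},  # back end only
    ],
}

def allowed(workload: str, proto: str, dst: str, port: int) -> bool:
    """Default-deny: a flow passes only if some rule matches exactly."""
    for rule in RULES.get(workload, []):
        if (rule["proto"], rule["dst"], rule["port"]) == (proto, dst, port):
            return True
    return False

print(allowed("imaging-analytics", "tcp", "10.0.5.20", 443))  # True
print(allowed("imaging-analytics", "tcp", "8.8.8.8", 53))     # False
```

This is the zero-trust posture the hospital example later relies on: the machine vendor doesn't need to trust the hospital network, because policy travels with the workload.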
I mentioned 256 megabytes as the lower extreme of memory we're targeting — we're a little higher than that today — but we've got all these extra features, and we've modularized the hypervisor: you can not only choose your hypervisor, but skip it entirely if you don't need VM support for legacy and only need containers. We're not trying to reinvent anything. We recently adopted containerd, the most popular container runtime; we certainly work with Docker; and, as I mentioned, we're starting to evolve toward Kubernetes support by bridging to the great work happening in K3s. But we're not going to do that carte blanche — we'll do it in a way that makes sense for the IoT edge, especially given the security elements we have to provide underneath, because these devices are not in a physically secure area and so have unique considerations. On the modular hypervisor support: initially we were opinionated about the Xen hypervisor as a community; then folks in the landscape asked about KVM, so we modularized it and added KVM support. There's also an open source, real-time hypervisor that Intel has been leading the development of, called ACRN — very lightweight — which can help support mixed-criticality workloads: a mixture of time-critical and more time-sensitive workloads. And we're always looking at new ways of creating north-south and east-west communication between edge nodes — edge to cloud, but also edge-to-edge data flow between compute nodes at the same tier. It's all about starting where customers are today.
Customers need to support legacy as well as containers, while also evolving — not reinventing, but taking from various upstream and downstream projects to create that one-stop shop: the one stack you pre-install on hardware and ship out, after which you take advantage of an open ecosystem of value-add. We think that's super important for all the reasons I've been mentioning.

And of course, you're nobody these days if you don't have Raspberry Pi support. Stefano from Xilinx — there are some blogs coming about this — and some others helped build an EVE image for the Raspberry Pi 4. You can get that today; just search online and you'll find it. You can get EVE running on a Raspberry Pi 4 and work with the controller of your choice. There are controllers coming that will ship with dev kits, but for now take your pick — build your own controller if you'd like; there's a very simple one today, as I mentioned. This is great for tinkering and starting to build a solution with your choice of value-add, at a very low-cost point of entry. Definitely check that out — and again, thanks to Adam, Stefano, and the community for contributing that support. We talked about this on a recent webinar about EVE that you should be able to find online. There's also lots of other hardware from companies like Advantech, Lanner, Supermicro, OnLogic, Dell, HPE — you name it — on the EVE hardware page. Take a look and get involved if it makes sense; we'd love to have you.

Speaking of involvement: as a community, we've grown to almost 50 unique contributors.
The last time I updated this slide it was 40, and that was a couple of months ago. I mentioned Stefano from Xilinx and others doing great work on the Xen side related to the Raspberry Pi; we're working with Intel around ACRN, and other providers are making contributions too. And of course ZEDEDA, as a company, contributed EVE. We have a commercial controller, but as I've said before, we firmly believe it's best for the community and the overall market to have a completely open foundation — that's why we put it into the Linux Foundation rather than open-sourcing it on our own. That allows the community to develop it transparently, and it's a rising-tide-floats-all-boats kind of thing.

We've seen EVE deployed in various use cases. Wind turbines, as I mentioned — the secure proxy is an example there. We've seen people exploring it in hospitals: say I'm a medical imaging machine manufacturer, and I put imaging machines into hospitals as a service for my customers, but I don't own the network they sit on. I'm in the room with the patients, not in the data center, and I'm trying to deploy analytics models to those machines without knowing what the network looks like. So I need a distributed firewall — a zero-trust model in general. We see the same in manufacturing plants: people use EVE not only for workload consolidation, but also to securely create a segmented network to get data into some cloud application. These are network overlays over the process — protect the process at all costs, but still pull data out to other systems for further analytics and general visibility.
And because of the distributed firewall capabilities and all the lower-level functionality that comes from the bare metal foundation, I'm able to do that. Then in oil and gas, you've got wells in remote locations, and people are using EVE to deploy AI models for predictive maintenance. The wind turbine folks are using models to listen for flaws and potential issues with those turbines. And it's really about collaborating with various other projects as we go.

So, hopefully this demo works. This is Sarah from the ZEDEDA team, at a conference where we were presenting with other LF Edge projects. It's a quick demo that starts with the Fledge project within LF Edge and shows how EVE comes together as the orchestration layer. [Demo audio:] "Hi — I'm attached to a small box running Project EVE-based software, which virtualizes the hardware, network, and application layers. On top of it, we have a small application that's translating the sensor data and sending it upstream in real time, so I can see maintenance data — vibration information, temperature, humidity — things that tell me the general health of my wind turbine, in real time. And with this box, EVE can easily run other applications beyond the one monitoring my wind turbine: I'll go into the interface, into my catalog, and from here I can choose an application such as EdgeX Foundry and very easily deploy it out to my box as well." Okay — I saw in the chat that some of you didn't see the video. I'm watching it here, but that video is online.
In any case, Sarah was walking through a use case, and the EVE component in it provides that abstraction layer. We decided not to attempt a live demo over this platform — so check it out and get involved. There are a lot of links here; if you just search for "LF Edge EVE" you'll find plenty of detail. All of the Technical Steering Committee meetings are open. The value of the Linux Foundation in general is the governance and the extra services they provide, and then we work as a community. We'd love folks to join LF Edge as members, but it's a technical meritocracy — just get involved; the best way to vote is with code. All the wikis and so on are here.

With that, I'll summarize and then take any questions. We talked at length about how the IoT edge is unique. There is no single edge — it's a continuum, and it's not the same as the data center. You want similar principles, but necessarily different tools, because of the diversity of hardware, software, and protocols, the resource constraints, and the need for autonomous operation. You've got different degrees of latency: is it critical, or just sensitive? Too many people say "real time." Real time to a building operator is usually 15 minutes; real time to an airbag is a little faster. There are the security elements, and I talked at length about the scale factor: tens of massive public clouds on one end, eventually trillions of constrained devices on the other, with the smart device edge somewhere in between. It's an exponential scale. It's really important to think about that scale factor and all of these constraints.
We're talking 256 megabytes of memory on a single node, up to a small cluster — and we'll do Kubernetes right; there's lots of work spinning up around that. If you'd like to help bridge Kubernetes to EVE, get involved. Lots of cool stuff happening now, but again, we'll do it the right way. It's super important to have an open architecture. Yes, you can go build it on your own — I need to decide whether to build on my own or not — but if you build a silo, you miss out on the party, which is going to be this interconnected, everything-to-everything ecosystem play. This is about everything being connected with privacy maintained: if you build trust and set privacy terms you're okay with, the ultimate goal is to start interconnecting different use cases and building new infrastructure — kicking off a snowball effect in the broader market. Imagine if the internet hadn't been open. You don't have to go crazy with it: put the right tools in today, architect from the ground up, start small, and scale over time. EVE is built to be that foundation for the IoT edge — outside the physically secure data center, still capable of running apps, typically headless and Linux-based, but also supporting Windows, which is a very well-established ecosystem.

So with that — I don't know if there are any questions; I think we're probably running out of time. Sorry about the video. Also go check out the LF Edge YouTube channel; there are a variety of demos of the projects and a lot of good content there. We appreciate the work the Linux Foundation team does for us as a community. And I see May shared the link for the video. It doesn't look like we have any additional questions — I must have explained it perfectly. But reach out and get involved in the community.
If you have any other questions, you can always go there. I'll try to get on the Slack after this — I have back-to-back meetings — but we'll definitely look for you in the community, and we'd love to collaborate within the project. All right. Thank you.