Okay, welcome everyone to the Linux Foundation Open Source Summit, near and far. My name is Kathy Giori, and I'm here to talk to you today about Project EVE. It's a project under the Linux Foundation Edge group (LF Edge), and it's meant to take the advantages of cloud computing and deploy them to the edge. So it should delight app developers, because it makes it easier to get their applications to the edge. I work at a company, ZEDEDA, that has a commercial orchestration tool for it. A little bit about me: I'm actually a double E, an electrical engineer, so I really appreciate working with high-quality software developers. Software has always been a tool to me, a tool to make my life easier, to make my life better. So when we can orchestrate edge computing and make it easier for someone like me, that's a really good thing. I've been doing open source for decades, and I'm a very big fan, from an educational standpoint and from an industry-community collaboration standpoint. I've seen a lot of examples of innovation being much better off when you've got industry and the community collaborating together. Before I get started on this talk, I just wanted to shout out a couple of my favorite projects. One is MicroBlocks, a physical-computing non-profit project where you can teach your kids, teach students everywhere, to program microcontrollers, which are sort of the root of the Internet of Things, right? These are the sensors and actuators of the world. MicroBlocks is a block-based programming environment whose core developer, John Maloney, also wrote the first eleven years of Scratch. So you can get kids into STEM education at a lower level, where they don't have frustration and pain, and I'm a huge fan of that. I also worked at Mozilla on the WebThings project, and that still powers my smart home privately and securely. Last year, in fact, I did a private smart-home tour of it, because I was giving a talk from my home. So that's another great project.
But the focus of the talk today is the open source software of Project EVE, and then I'm going to give a live demo at the end, where I'll boot up a server with EVE running on it and show how I can onboard it and manage it from an EVE controller. I work for ZEDEDA, so I have a biased opinion here: ZEDEDA's commercial controller manages the open source EVE-OS running at the edge. First, a little background: what is edge computing? In a nutshell, edge computing is everything outside the cloud. The cloud is the data centers: somebody else is managing the hardware for you, and you're running applications and storing and processing data. Outside of that, you have the regional edge and the access edge, with other big industries behind them. Then it becomes everyone's field offices, deployments, and applications, across industry, manufacturing, energy, healthcare, you name it, even down to private smart homes. The user edge has three tiers here. The lowest tier is the constrained device edge; don't talk to me about that, Jonathan's in the back, he's working on the constrained side of IoT. EVE-OS fits on Linux servers, PCs, and gateways, more x86-class hardware. The sweet spot, I think I circled it, well, no, I guess I didn't circle it, is basically the smart device edge and the on-premises container edge. This is from a taxonomy white paper. Then I want to compare the applications that run on the edge to cloud native. When you deploy apps and containers to the cloud, what operating system is actually running below your applications? Well, I don't know. What OS does GCP really run? You tell me; I honestly don't know what's running in all these public clouds.
But when I want to deploy my generic container down to the edge, I have to ask: is it Ubuntu? Is it RPMs or debs? Can I run containers, or can I deploy a VM on top of it? What OS is it? What hardware is it? What architecture do I have to compile for? There are all these questions you have to ask. So why do app developers like cloud native? Because they have to ask fewer of those questions. They don't have to manage and maintain the OS or the underlying hardware; somebody else is doing IT, security, upgrades, and all the specialized handling in the public clouds. And the multi-tenant application space means that somebody else's application is far less likely to step on yours or cause interference with yours. Container orchestration has all sorts of great tools, Kubernetes and so forth, that let you manage your application lifecycle in a sustainable and convenient way. So the OS is no longer your pain point of some gigantic integration; app development is one container at a time, essentially. At the edge, it's quite a bit different, because you can't be so blind to the hardware. It's not hardware agnostic. x86 is still dominant if you look at the architectures Docker containers support, but that's kind of on the industry side. ARM is becoming more and more popular, I think, in vision applications and AI/ML. And hopefully you'll go next to Drew Fustini's talk about RISC-V, because we'd like to see RISC-V start to take off in this market as well; I'm a big fan of open hardware. Then, because you're dependent on what the host is, you have to know what packages are installed, or you're going to have package conflicts with other applications running on it. Somebody has to maintain it. Legacy software is not always easy to containerize, so you can't always have just one fixed environment.
And so, yes, at the edge the OS does become the integration point, and/or the pain point, or you could pay someone to help you out. Now we're going to jump into what EVE-OS is. EVE-OS is essentially our ideal of an open OS for the edge. The clouds have their OSes, but we don't know what's behind them; at the edge, EVE-OS is entirely open. You can see all the way through it: it's hosted on GitHub, a project under the Linux Foundation Edge group. You might find it in gateways, in cars, in wind turbines, in manufacturing plants, in healthcare, you name it, all over the place. As this project was initially developed, the main goal was security of that edge hardware, so that you never lose control of it and you don't have to do a truck roll and go visit that hardware because something torqued it. EVE-OS, the Edge Virtualization Engine, sits directly on the hardware layer, and all the applications get deployed on top of it; each application can have its own network policies and its own dependencies. This diagram shows that you actually need two partitions, because you do upgrade EVE-OS on occasion, or you can, certainly people do, and you have a complete partition fallback: in case the upgrade fails, it will fall back to the other partition. It takes about half of 500 MB for each EVE-OS partition, and then on top of this you can deploy Windows, you can deploy K3s, you can deploy your own containers, your own virtual machines, Ubuntu, anything you want, basically. It's not a restricted environment. But EVE-OS does not stand on its own; it needs a controller, because that security-by-design is an open API.
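The dual-partition fallback just described can be sketched in a few lines. This is a simplified model with made-up names, not EVE's actual bootloader logic:

```python
# Simplified model of A/B partition fallback during an OS upgrade.
# Illustrative only; EVE's real boot logic lives in its bootloader/installer.

def choose_partition(active: str, upgrade_in_progress: bool, new_boot_ok: bool) -> str:
    """Return which of the two partitions ('a' or 'b') to boot next."""
    other = "b" if active == "a" else "a"
    if not upgrade_in_progress:
        return active   # normal boot: stick with the known-good image
    if new_boot_ok:
        return other    # upgrade verified: the new image becomes active
    return active       # upgrade failed to come up: fall back automatically
```

The key property is the last line: a failed upgrade never strands the box, because the old partition is still intact and is booted again without any field visit.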
What happens, essentially, is that the EVE controller is how IT manages the deployed infrastructure; the DevOps people don't really care where the edge hardware is running, they're just deploying their applications and getting data back from them or processing it. But the edge could actually be in a windmill, or in a truck, or in some giant robotic machine. And the nice thing is that it's not just containers or just VMs; you can support both in parallel, multi-tenant. It's all open source, with an open API, and there's an open source EVE controller instance, so you can look at everything. It allows people to choose their own hardware, their own applications, and the public clouds these apps are connected to. EVE-OS is really an infrastructure layer; it's not a data-layer thing, you put data on top of it. The right side of this diagram shows today's situation, where you have all these edge nodes exposed to the weather, and to what I call the weather of cyber attacks. What ZEDEDA does is provide a commercial EVE-OS controller that allows your IT staff to say: okay, all you DevOps people, now it looks as if your application is just part of your cloud space. They're in their own cloud, and their application is maybe collecting sensor data or actuating at the edge, but they're still in their own world. That's the ZEDEDA umbrella view of the world. Now I want to talk a little bit about what's under the hood: what is EVE-OS under the covers? Well, it heavily leverages Alpine Linux and other packaging systems. Because the OS no longer needs to be that single integration point for everything you do, Alpine Linux is a very good starting point for containers: it's built around just enough OS for any occasion. One of our co-founders is active with the Alpine Linux project as well. If you compare it from a size standpoint to some big server stack, it's quite a bit smaller.
Alpine Linux itself is just about 140 megabytes. It's got BusyBox and musl and a no-nonsense init system, as they say. Then there's package management, fresh packages served up daily, and it's pretty simple to add packages; they have them all nicely organized as Alpine Linux packages, so that's a nice way to figure out what you need. Essentially, EVE-OS's unit of integration is just these container images of the applications you want to deploy. It leverages the Docker registry heavily: you can point to a Docker source and download your containers, and I'll be doing that as part of my demo, fetching a container from a Docker registry and deploying it to the edge. EVE-OS also leverages LinuxKit and containerd, and that's how the build-system infrastructure leverages best practices and open software and manages the applications. There are several other smaller projects that are leveraged as well. But to bring it all back together: what EVE-OS does is let you think of not just all the data-center cloud servers, but a whole bunch of deployed EVE-OS boxes, and the EVE controller makes the applications running on those boxes appear as if they're just part of some larger, giant, global cloud. Just as a reminder: EVE-OS does not function on its own; it just sits there. Its only connection, a secure connection, is to push data out to the EVE controller, every five minutes by default, and that's configurable. So you don't actually need to poke firewalls or do anything to get access to it. In fact, if EVE-OS is running on the box, wherever you plug in or connect the box to a network, it will phone home to the EVE controller specified in its initial config file. It finds that EVE controller, sets up its TLS link with certificate exchanges, and thereafter that EVE controller is managing a collection of EVE-OS boxes.
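The outbound-only phone-home pattern can be pictured with a small sketch. The URL, payload, and function names here are hypothetical; the real EVE device API is defined in the Project EVE repository and uses certificate-based mutual authentication:

```python
import json
import time
import urllib.request

# Hypothetical controller endpoint and payload shape, for illustration only.
CONTROLLER_URL = "https://controller.example.com/api/v1/edgedevice/ping"
INTERVAL_SECONDS = 300  # "every five minutes", and configurable

def build_status(serial: str) -> dict:
    """Assemble the minimal status report a node would push outbound."""
    return {"serial": serial, "timestamp": int(time.time()), "state": "ONLINE"}

def phone_home(serial: str, url: str = CONTROLLER_URL) -> None:
    """One outbound report over TLS; the node never opens a listening port."""
    body = json.dumps(build_status(serial)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)

def run_forever(serial: str) -> None:
    """Keep reporting; if the controller is unreachable, just retry later."""
    while True:
        try:
            phone_home(serial)
        except OSError:
            pass  # no network yet: keep trying, never give up control
        time.sleep(INTERVAL_SECONDS)
```

Because every connection is initiated from the edge node outward, no inbound firewall hole or port forward is ever needed for management.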
And so you never need to port forward or poke firewall holes. You do need to do that for the applications that are running on EVE, but EVE itself is very flexible. I'm going to do a couple of slides explaining the live demo, and then I'm going to attempt to bring up a server, install EVE-OS on it, and connect it up to my EVE controller. I'm going to use ZEDEDA's Zedcloud solution, and then deploy another open source project under LF Edge called Fledge; I've been working with the team at Dianomic on that container, for a demo for OSDU, the oil and gas and energy consortium. For my little demo, I'm going to leverage Equinix Metal's bare-metal server farm. Has anyone used Equinix Metal before? Just one? Okay. It's pretty cool; I have fun with it. Because I don't have physical access to the box, I'm going to bring up some server in Silicon Valley, in Santa Clara or San Jose or somewhere. The only thing I know about it after I install EVE using iPXE (has anyone used iPXE? another super cool tool) is the IP address, but that's enough for me, because a public IP address is unique, so I can use it to bring the box on board to its controller. And I'll show you that EVE-OS is defined in this iPXE file, which is less than 20 lines long; you can see an example in the GitHub releases, and that is enough to tell the server how to boot up EVE-OS. All right, that's my cue. I need to skip over to, okay, now I have to prepare myself with glasses so I can see the fine print. Here I'm already logged into my Equinix Metal account, well, our corporate account, and I already have one server running that I connected up yesterday. I put dates in the names because you rent these by the hour. I'm just going to pull up another server on demand, and it will give me a choice of locations. I want to bring mine up in Silicon Valley, and I'm going to choose the cheapest one.
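For a sense of scale, an iPXE config for EVE-OS is on that order of size. The fragment below is a hedged sketch with placeholder URLs, file names, and kernel arguments, not a copy of the real file; the real examples are attached to the lf-edge/eve GitHub releases:

```
#!ipxe
# Sketch only: fetch an EVE-OS kernel and initrd from a GitHub release
# and boot them. <version>, file names, and arguments are placeholders.
dhcp
set base https://github.com/lf-edge/eve/releases/download/<version>/
kernel ${base}kernel console=ttyS1,115200
initrd ${base}initrd.img
boot
```

That handful of lines is all the server needs to be handed at first boot; everything else is pulled over the network.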
This one doesn't let me demo remote attestation, because it doesn't have the TPM and the fancier hardware that some of the bigger ones have, but it's only 7 cents an hour, so I'm going to choose it. Then they have this easy-to-run "custom iPXE" OS, which is what I want to install. Everything this GUI does is also available at the command line, so I could create a script; in fact, I have a deployment guide for deploying EVE-OS where we show what's in this iPXE script and a bit about how to do this, and if you wanted to do it with the Packet CLI, you could deploy these servers that way as well, but the GUI is a lot easier to show. As part of this, I need the iPXE config file, so I'm going to copy this link and drop it in here. Instead of downloading this file from GitHub, I could also click "add user data" and put my iPXE config information right in the text box. I also like to add the time and date when I start it, because I can cut it off by the hour, so "11.18" helps me keep track of which is which, and I say deploy. So this is the new one deploying right now, and it has little progress bars you can look at, you know, it's configuring the network. But I like to actually look at the remote out-of-band console, so I'm going to copy that and jump over to here, where I was playing around with this earlier. There's a little help command, and you can see it gives me an out-of-band console where I can basically talk to this bare-metal box that is running in Silicon Valley. Once some of these boot init scripts start running, I should be able to see the boot-up process flowing on this screen. So let me just click over here as this is spinning up, let's see, pull up this, and pretty soon, there it goes.
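The same deploy can be scripted against the Equinix Metal REST API instead of the GUI. Here's a sketch, assuming the `custom_ipxe` operating system and `ipxe_script_url` fields of the device-create call; check the Metal API documentation before relying on the exact field names, and the plan and metro values below are placeholders:

```python
import json
import urllib.request

API = "https://api.equinix.com/metal/v1"  # Equinix Metal REST API base

def device_request(project: str, ipxe_url: str, hostname: str) -> dict:
    """Build the body for creating a bare-metal server that boots custom iPXE."""
    return {
        "hostname": hostname,
        "metro": "sv",                      # Silicon Valley
        "plan": "c3.small.x86",             # placeholder; the talk picked the cheapest
        "operating_system": "custom_ipxe",  # tell Metal to chain-load our script
        "ipxe_script_url": ipxe_url,        # points at the EVE-OS iPXE config
    }

def create_device(token: str, project: str, body: dict):
    """POST the device-create request with the account's API token."""
    req = urllib.request.Request(
        f"{API}/projects/{project}/devices",
        data=json.dumps(body).encode(),
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

With something like this you could script the whole "deploy by the hour, delete when done" cycle shown later in the demo.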
It starts flowing; it's configuring a bunch of stuff on the bare metal, and we'll be able to watch it fetch the images next. Again, it's downloading from GitHub: it's fetching the initialization pieces, the rootfs, the kernel, all of those packages, from GitHub, and that's why we call it "boot from GitHub". In this whole part of the demo we have to be a little patient, because some of these processes churn; they take a little time, and then they run stuff, and then they mount things, and then they reboot. So it's restarting the system; I don't know if you can all see this, is this big enough? It's restarting the system, and then there's going to be a screen that shows, well, we'll eventually get to the EVE-OS logo, but first it has to download a whole bunch of packages. I think the iPXE stuff is working its way through, and now it's configuring some network interfaces, and next you'll see it start downloading packages. There we go: you see some download URLs for this release, the 6.13 release that I pulled, and everything gets pulled down pretty fast until it gets to the rootfs, which takes a little more time. So it's still churning away at 39%. If we jump over here, you can see that this new one I installed is now technically online from a network standpoint, while EVE-OS is still being installed; network-wise they have ways of talking to it, so you can look at traffic (it's not much yet, since this is not running), there's a timeline of the details, and general information about it. Let's go back here; we're at about 98%, so pretty soon we'll see the EVE-OS logo. Even while we're waiting for that, I could start initiating the onboarding process on the controller side, because you can configure the controller to be ready to accept the onboarding of the EVE-OS box even before EVE-OS is up and running. Whoops, that was just when it was doing that, so
here's the EVE-OS logo right up here, you see that? So without further ado, let's run over to my ZEDEDA controller, and I'll show you that I already have one box online that I connected to this account yesterday. We're going to create another one, called "oasis" for this conference, and just remind myself: it's an x86, and I'm going to use the onboarding-key type. The trick here is that I actually just need the IP address of the box, so I'm going to take the IP address of this box and drop it in as the serial number, and then I need a special onboarding key that was part of the EVE-OS image, and drop that in here. Then I tell it the brand of the box; this happens to be a Supermicro, and we have it defined as the Packet t1.small. I'm just going to do standard DHCP on the Ethernet interfaces. When you define these boxes you can set up policies for all of the different resources on the box, the ports and everything, but I'm just going to add it as is, and not worry about those ports, because I don't have access to them anyway. And then it was added, and you can see the one from yesterday has this little green dot showing it's online. That's the Fledge demo; if I click on it, it was last touched over a year ago, so we're not going to worry about that. But this is the one we just brought up, and now the controller side is waiting; it's in this state of "okay, I created this and I'm ready to provision something, but now I have to wait for the EVE-OS box to reach out to me." So in the meantime, this EVE-OS box is going to be reaching out; we see traffic, and now we see it's downloaded all these packages, that's the green inbound line, so it should be reaching out to the controller soon. It can take up to five minutes or more, but if we keep watching the events, it will slowly move from provisioned, through boot and reboot, to online. So let's see, let me get rid of that one; status: provisioned. Here's where we have to be patient, okay.
There's one more little step once something is registered; if I keep reloading this page, we'll see a few more things come up. I'm not going to go through the entire orchestration controller side; I mainly want to focus on how I can take this box, where somebody's just plugged in the network and it's ready to go, onboard it to this controller, and deploy a container to it. There we go, so, poof, it did those last few steps really quickly, and now it shows a status of online, just coming online, and the basic info. This IP address here, .239, should match this IP address up here, .239, so in fact it came online and that's our server. So now we have the base edge virtualization engine; what do we want to do with it? You could deploy clusters to it; there are crazy lots of things you can do in terms of data stores you can pull images from, EVE-OS images you can update to, application images we've been building to test on this server, volume instances you can add. But I just want to quickly deploy a container application. Even though there are commercial ones like AWS and Azure IoT, Fledge is this open source project, and I'm going to deploy this one, which is just a demo for OSDU, as a single deployment to this one box. I need to give it a name, like "osdu", since OSDU is my demo, and next, next, next, deploy. All of those other screens asked whether I want to keep the default network instances and so on and so forth. There's already another box running another container, with different applications, but the cool thing now is that when you watch this, it will go from "okay, I created this instance" to downloading the packages, deploying the packages, and booting the packages. Right now you see the state "downloading", it went from init to downloading, and then I'm going to just jump over here.
When you look at this side, which is the actual box, you can see the CPU utilization and the network rates. We don't see the traffic coming in yet, but pretty soon we will, because it's downloading these containers; it's initializing some of the disks, bringing the content volumes online, and loading them up. Switch back to this one here: you can see now it's creating the volume, creating the image on the actual EVE-OS box, on this bare-metal box that's somewhere in Silicon Valley. The whole idea of this is that in the field you have IT support people who don't have to be super technical, just plugging in networks, and EVE-OS is already running: it phones home, the security-by-design API connects it to its controller, and then from the controller you can deploy all the software you want and manage, maintain, and orchestrate it. So Project EVE is really about IT infrastructure, and the applications are still everything that DevOps and other people want to do; the data plane is in another space. But this Fledge demo is simulating the types of applications people would want to run. Let's see, it's still creating that volume, and at this point we should start seeing network traffic, here's some traffic right here, and you can see it creating the volume. I just wanted to show again that there are reports for things: where the server is located, all the stuff we've been doing. There's not too much going on here. Someday I'll be able to demonstrate Kubernetes clusters; you can set things up in an enterprise so your K3s clusters can run on here. Let's see, utilization, oh, in fact it's online, so I missed those final few steps: after creating the volume it went to booting, and now it's online. So this application is now online, and you'll see it also when you look at the status of the edge node that we deployed it to; here's all that traffic from the downloading.
And if you look here, it also reports as being online, and how many resources it's using. Again, because this Equinix Metal box has a public IP, I can take that IP address, and, I can show you the basic info, this was set up in advance: when we set up this container we applied inbound and outbound rules, and the rules say that port 80 of this interface is mapped to port 8082. So when I take this public IP address and go to port 8082, I should be able to bring up the web interface of this application that I just deployed to a server in Silicon Valley. Now, because this container is for an OSDU demo where we won't have physical access to this web interface, because the box is behind a corporate firewall or whatever, it's been preconfigured to run what are called plugins, the north and south modules. Right now it's running scripts that set up the north and south sides. South is simulating oil-drilling data: it's ingesting readings from a CSV file, and it annotates the headers and so forth; this is a particular industry way of exchanging these types of data. Then the outbound side will consume and view the data with a protocol called OPC UA. So I'm just doing a playback of the data coming in, and then north, which is really the outbound side, says: here's this data; we've massaged raw data and turned it into OPC UA data. If you substitute the actual public IP address for localhost and you had an OPC UA viewer looking at it, you could actually see this streamed oil and gas drilling data. Again, it's not my domain of expertise, sorry, I keep doing that, but that's essentially what the data is. And you see the readings coming in and the readings going out. All of this was just based on us pushing this container, just a Docker container, to the edge. In
fact, I just want to show you really quickly the app image. The one that I deployed was this one, which one was it? I think it was just this one; this is a Fledge container built specifically for this demo. So, in a nutshell, there are my edge nodes, now I have two, and I'm running applications, and you can deploy as many applications as you want on them; there's no limit. These two are both in the same place; sometimes I get a dot in another location, since they have a couple of different data centers that are technically called "Silicon Valley". I didn't want to spend too much time on this demo, but I did want to give you a sense of how you can have IT centrally managed and then manage all these deployed things. The beauty of EVE-OS is that you deploy it to the edge and you never want to have to do a truck roll back to it; as long as it's on a network and powered up, EVE keeps communicating back to its controller, and then, if an application goes rogue, the IT person just blows it away. In fact, that was one more thing I wanted to show you: how you can blow these things away. Okay, now I have this application running, and let's say I don't like it. All I have to do is say: okay, bye-bye application, poof, you're gone, and this instance will go offline; it will cease to exist pretty soon, because I just deleted the running instance. We'll come back to that. The other thing I can do is delete the box itself, since Equinix Metal charges by the hour and this box is online. And then, finally, here's my server in Equinix Metal; I'm done with it, so I delete the box there too, and poof, it's gone. I still have one running, but pretty soon here, I think you'll see, there, my out-of-band terminal just terminated, so you can see that it really did go offline. Pretty soon Fledge will realize it's
not connected anymore. So that's the live part of the demo, and I just want to thank you all for watching and listening in on EVE-OS and Project EVE under the Linux Foundation. I should have put a bunch of links up, but there's slack.lfedge.org for our Slack channel; actually, maybe I should just show you a couple of quick URLs, places to find Project EVE. Project EVE is on GitHub at lf-edge/eve, and there's great documentation under the docs folder; it has an incredible wealth of documentation about all this stuff and how it all goes together. It really is cool but complex, and all this complexity is what makes my demo simple, right? I didn't have to do all this stuff myself, but it makes for a really cool demo when you have all of it working well. And the deployment I talked about has a write-up as well. So Project EVE is the main thing, but there's also lfedge.org, I think that's the URL, and under LF Edge there are all these different projects. So you'll find, here's Fledge, the project whose application I demoed, and here's Project EVE, and there are resources here that will take you to the code on GitHub, the documentation, the wiki, and how to join the mailing list. All of this is community-driven, and we welcome you to that community. The other place where I'd really like to see more growth is on the hardware side. There are open source controllers you can use, but that's not what I want to show; if I go to GitHub, lf-edge, I just want to show you the models. The whole idea is that edge hardware is very diverse and complex, and we want to support any of this hardware with EVE-OS. So here, if you go to the community-supported hardware models, you can see all of the hardware that is known to have installed, run, and tested EVE-OS, and each has a hardware model for its configuration.
I recently worked with a team to get these Tank boxes online, and it goes all the way down to, like, the Raspberry Pi I have running in my house with EVE-OS on it. It only runs on a Raspberry Pi 4 with 4 or 8 gigs of RAM, but you can run that as well, and on a Jetson Nano, though mostly it's x86 stuff. All of that is available if you branch out to all these projects. So thank you all for coming and listening, and I will take your questions now. Yeah, I'll repeat the question. Keep going a little bit. Yes, so the question is about Fledge: I was demonstrating deploying Fledge, and Fledge is like an app framework where you can set up a whole bunch of modules and do all of this edge processing; it does data processing, filtering, and reformatting. Fledge is one of these containers; there's another one, EdgeX Foundry. If you look in our marketplace, we have some of these common container-style applications, because they reduce the resources you need. You can deploy anything you want. And we actually have a developer program too, which I forgot to mention.
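The kind of CSV-to-readings reshaping that the Fledge demo's south side performs can be pictured with a toy sketch. The function and field names here are invented for illustration; Fledge south plugins follow their own plugin API:

```python
import csv
import io
import time

def csv_to_readings(text: str, asset: str) -> list[dict]:
    """Turn header-annotated CSV rows into Fledge-style 'readings':
    one dict per row, keyed by the column headers, tagged with an
    asset name and an ingest timestamp."""
    rows = csv.DictReader(io.StringIO(text))
    now = time.time()
    return [
        {
            "asset": asset,
            "timestamp": now,
            "reading": {name: float(value) for name, value in row.items()},
        }
        for row in rows
    ]

# Example: two rows of simulated drilling data become two readings.
sample = "depth,rpm\n100,50\n200,75\n"
readings = csv_to_readings(sample, "drilling")
```

A north-side service would then pick up readings like these and republish them outbound, for example over OPC UA, as in the demo.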
If you want a trial of Zedcloud for your own EVE-OS instances, go to zededa.com/welcome-developer and you can sign up for a free trial. Then you can try it out: if your container is in a private Docker registry, you just put in your private URL and your username and password, and it fetches from your private registry; you can pull from a public Docker registry; and we have a whole bunch of images we host on AWS somewhere. You just have these data stores that you point to, and you can fetch any container or any application. You can run a Windows 10 virtual machine and then run Fledge right next to it on the same box. And we're doing integrations for our big customers, with Terraform, with K3s (that was one of those requests), and with Azure IoT. With Azure IoT, all the setup is in Zedcloud: you set up your instance with all the secrets to your hub, and then you deploy it. Like I said, your DevOps people, your data people, are working in their own cloud world, but the IT people still own the box, they still own the hardware, and if anything happened to that Azure IoT instance, or it had some vulnerabilities, the IT person still won't lose control of the box. That's the whole premise: Colonial Pipeline and other places show that when some OS gets compromised, you don't know what happens to your data. So, good question. Any other questions about Project EVE? Are any of you app developers, like in the container space? All right, well, thank you for coming, and spread the word, and hopefully people online will get a chance to check us out. Hope to see you on Slack, slack.lfedge.org, and we invite you into the project. Thanks.