Today I would like to talk a little bit about the best practices and approaches we follow to support building and distributing EVE across multiple platforms, with a focus on ARM. I'll go through a very brief introduction, talk a little about the EVE project and how we build EVE, and then jump into how EVE works on ARM: our build system, the ARM devices we have supported so far, and some upcoming devices as well.

When we talk about edge computing, there are a lot of big challenges to solve, especially with respect to security. We have no guarantees about network security. When we talk about a data center, we are talking about a much more controlled environment: we have control over the network and over the internal infrastructure. We don't have these things, or mostly don't have them, on the edge. So we have no guarantees at all about physical security or network security. We also have a great diversity of infrastructures on the edge, a mixture of remote devices, not to mention the plethora of applications that are tricky to deploy, to configure, to update. So this is really complicated, and we must be able to integrate with a lot of different environments, several clouds, several types of cloud systems. We also need to provide scalability and automation. There is a huge number of edge devices out there, and they are usually geographically dispersed. And when we talk about devices in, for instance, Industry 4.0, once they are deployed in the field they stay there for several years, so we must support a long maintenance lifetime, several years, to give an example. We also need to deal with unreliable connectivity: network outages, latency, expensive bandwidth. We might not even be able to control all of this on the edge. So an edge operating system must take care of all these challenges, and they can be solved with edge virtualization.

I don't want to go through this whole infrastructure and all the projects under the LF Edge umbrella, but I would like to bring up this slide to explain exactly where EVE is located in the software stack. EVE sits right on top of the hardware: it is the operating system running bare metal and providing the virtualization environment for the applications. More than that, it communicates with a controller in the cloud and allows the orchestration of these applications, so it provides a common environment for that.

EVE stands for Edge Virtualization Engine. It's an LF Edge project, currently at stage two, the growth stage. The EVE project provides EVE OS, a Linux-based operating system for distributed edge computing, distributed under the Apache 2.0 license, so it's open source. EVE provides a common system base for many different platforms. What we want to do is take care of the hardware complexity: EVE wants to decouple the applications from the hardware. Applications should not have to care about which kind of controllers you have in your hardware or about managing its resources.
So the goal is to provide this abstracted platform for applications by using virtualization. EVE provides a standard API that allows applications to make use of the resources in a much more efficient way. It can also partition the hardware resources in order to increase workload consolidation. So EVE wants to give applications this common environment by using virtualization, and it allows total control of these remote devices, total control of the edge device, with reliability and security.

Basically, the architecture of EVE, as I said, is based on Linux, so we use the Linux kernel. It uses virtualization and supports different hypervisors: by default we use KVM, but it also supports Xen and ACRN, so you can change the hypervisor underneath. EVE is implemented as microservices running through containerd, and it keeps two system partitions to ensure that if we do an update, we are able to roll back if something goes wrong. The rest of the disk is used for the workloads, the edge applications, and everything else. So basically EVE runs on top of the hardware and implements these microservices using containerd; we run our microservices as containers as well, which guarantees full isolation between them. And it can spawn these edge applications, which can be virtual machines or containers. It does all of that by providing a standard API that is accessed through the controller in the cloud, and it's an open API.

So what are the micro components of EVE? We base our build system and our root filesystem generation on Docker and LinuxKit. We use LinuxKit to generate the root FS of the operating system with the tools that we need, and we use Docker to containerize all the toolchains and compilers that we use. We want to minimize the software dependencies that the user needs to install on their host system to build, test, and run EVE. We also depend on many pieces of software that we need to install into our image, and for that we use Alpine as our package manager. We try to shrink our final image as much as we can and keep the system as minimal as possible. As I said, our microservices run as containers and they are written in Go. And we don't rely on any system manager, meaning we don't use systemd or SysV init. So we could say that EVE is a kind of Linux distribution, but it's really customized, really shrunk down to be minimal and able to run across multiple platforms and many different devices, and everything is controlled by our microservices.

We use a custom version of the Linux kernel. We try not to change it that much, but for some specific platforms, especially ARM, which is what we are interested in today, some drivers are not fully upstreamed, so we need to patch a bit. We base our kernel on the mainline kernel, and on top of it we apply a few patches, plus a few more for specific platforms like ARM. We also maintain our bootloaders, mainly U-Boot and GRUB, also customized, because we need to include some features that are not upstreamed yet.
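Just to illustrate the idea of assembling an operating system from containerized pieces, here is a minimal LinuxKit-style configuration. This is not EVE's actual rootfs definition, and the image tags are placeholders; it only shows the pattern: the kernel, the init pieces, and every service are container images that LinuxKit composes into a bootable root filesystem.

```yaml
# Illustrative LinuxKit configuration, NOT EVE's actual rootfs definition.
# Image tags are placeholders (<tag>); real configs pin exact versions.
kernel:
  image: linuxkit/kernel:<tag>        # the kernel itself ships as a container image
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<tag>               # tiny init, no systemd or SysV init
  - linuxkit/runc:<tag>               # container runtime used to launch everything below
  - linuxkit/containerd:<tag>         # containerd supervises the long-running services
onboot:
  - name: sysctl                      # one-shot containers executed at boot
    image: linuxkit/sysctl:<tag>
services:
  - name: getty                       # long-running service containers ("microservices")
    image: linuxkit/getty:<tag>
```

EVE's own microservices are composed in the same spirit, with containerd supervising them instead of a traditional init system.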
And as I said before, we are based on LinuxKit, with KVM as our default hypervisor, but EVE also supports other hypervisors, like Xen and ACRN so far.

So what are the advantages of keeping these micro components? I know that relying only on Docker containers can consume a lot of disk space, for instance, but on the other hand we get a very consistent environment while building the system and generating our images. As advantages, we keep total control over the running system: we don't need to live with any system manager, we keep control ourselves through our microservices. And we can guarantee the consistency of the build system. We don't care that much about the user's host system, because we provide our toolchains and our cross-compilers through Docker containers, and this makes it easy to reproduce issues and to give support from our side as well. But of course it comes with some drawbacks. We need to control everything, since we are taking this full control, and it might be tricky to integrate some custom board support packages. So it might be tricky to integrate EVE with another custom kernel with different drivers for specific platforms, and this can be an issue precisely with ARM devices.

EVE is distributed as binary images, and we support x86 (AMD64) and ARM64, and we also provide images for RISC-V. Talking about the x86 environments: x86 edge devices are pretty much based on the PC platform, so they have firmware that provides a unified boot method, especially UEFI. So we have no trouble providing a single image and booting it on these machines. The peripherals are usually connected through the PCIe bus, so they can be easily discovered and passed through to the VMs. This way, generating a generic image able to support multiple devices is not that hard: we can provide a single image with all the drivers enabled in the kernel as modules, and that single image can support a lot of different devices.

However, for embedded devices this is unfortunately not true, because there are many different boot methods, many different ROM codes, many different ways the SoC boots, especially on ARM, and that might require customizing bootloaders. The timings for the RAM, for the DDR modules, are usually embedded inside U-Boot, inside the bootloader. So we cannot use the same single image across multiple devices on ARM, for instance; this requires us to customize these images. You can see the problem we have now: we want to keep things as uniform as possible and at the same time support all these different platforms and different architectures, because we really want to make it easy to deploy EVE, install EVE, and get those edge devices ready to go.

EVE can also be built from different host platforms: it can be built on an x86 machine or on an ARM machine. This is particularly relevant nowadays with the rise of the MacBooks with M1 and M2, which are ARM architectures. So we must be able to provide this environment to build, test, and run EVE from ARM, from x86, or from other platforms, and we must follow good practices and approaches to do that.
We must take them into account to avoid problems and to keep the consistency of our image. And is there any tool that fits all these requirements? Guess what: yes, we have containers that can do that.

So how do we generate images for EVE? Although we are based on Docker and LinuxKit, we also use Makefiles, to keep it as simple as possible to generate these images. If I want to test EVE, I can just generate a live image and run it using QEMU. To do that, I just need to hit make; the ZARCH and HV parameters are optional here, because by default they are AMD64 and KVM. Unless you want to generate for ARM, which I'll show a bit later, then you need to specify them. In a nutshell, if you want to test EVE, you just need to fetch it from our repo on GitHub, hit make live and make run, and that's it. You should end up with an EVE instance running on QEMU, and you can then onboard this device to a controller.

So, EVE on ARM. There are billions of devices running on ARM chips nowadays, and there are hundreds of manufacturers, because ARM doesn't differ from x86 only in the instruction set. ARM is actually only the instruction set, to put it in a simplified way, but the SoCs are really vendor-specific, unlike x86. Inside each ARM SoC we have many different controllers, many different devices and features. We might have very different structures in the SoCs in terms of the cores inside: multiple microprocessors inside the same SoC are very common nowadays, and you can get different ARM cores inside the same chip, like Cortex-A and Cortex-M. You might have different coprocessors, like video processing units or neural processing units. And some of these CPUs might not implement the full ARM architecture, the full ARM instruction set; they can lack some extensions, like TrustZone for security. So we must handle all these different SoCs. How could we produce an image that is able to run on all these different platforms, all these different SoCs?

Well, guess what: we can still use containers to build EVE. And at least for ARM, we do support cross-compilation, which makes it easier to generate an ARM image from an x86 host. The good thing is that you don't need to set up anything: you just fetch EVE and, since we're based on Docker, we're going to fetch all the containers, all the cross-compilers and toolchains, and the image is going to be generated in a transparent way.

So for ARM, how do we generate EVE images? It's pretty much exactly the same as for x86, because that is our goal: we want to keep the process as uniform as possible. As I said, we just use make the same way, but now we specify ZARCH as arm64. We can also switch between the hypervisors. Still, some platforms really require customized images; some platforms, for example, put the bootloader at a different offset in the image. In order to support that, we came up with an additional argument in our build system: we support a platform parameter that you can pass to generate EVE OS images specifically for a given device.
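Before getting to those platform-specific builds, here is roughly what the basic test flow I described looks like in practice. Treat it as a sketch: the ZARCH and HV variables and the live target are the ones mentioned above, but double-check the exact target names against the README in the repository for the EVE version you are using.

```sh
# Sketch of the default build-and-test flow on an x86 host.
# ZARCH=amd64 and HV=kvm are the defaults, so they could be omitted here.
git clone https://github.com/lf-edge/eve.git
cd eve

# Build a live EVE OS image; toolchains and cross-compilers are pulled as Docker containers.
make ZARCH=amd64 HV=kvm live

# Boot the freshly built live image under QEMU to get a local EVE instance to play with.
make run
```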
So as an example, we support ARM with the i.MX 8M Plus SoC from NXP, and we support a couple of devices based on this platform. If I want to generate an EVE OS image to be deployed on one of these devices, for instance the Advantech one, it's pretty simple: I don't change the way I build EVE. I build exactly as I build for x86, but in this case I pass the platform parameter, so I build specifically for this platform. The final image that I get on my SD card or USB stick I can deploy to this device, and that's it. The key point here is that the whole environment provided by EVE will be exactly the same, no matter if I'm running on x86, on RISC-V, or on ARM64.

So which ARM devices do we officially support in EVE OS? We support the Raspberry Pi 4, the Model B to be more specific. On this platform we also support an Advantech HAT, an industrial HAT for the Raspberry Pi that you can put together into a complete device. On the i.MX 8M platform we support three different devices: the Advantech EPC-R3720, the phyBOARD-Pollux development board from PHYTEC, a company from Germany, and another SoC of the same family, the i.MX 8M Quad, through its evaluation kit board. We also have support for the Rockchip RK3399. And of course we also support the virtualized environment with QEMU, and we support the HiKey board and the Jetson Nano. The devices that are officially supported in EVE OS have the drivers they need to run; we can generate images specifically for these devices and run EVE OS on them, and of course EVE OS was tested on these devices as well.

For the Raspberry Pi we also support a TPM chip, the SLB 9670, that can be connected to the Raspberry Pi. We have support for that because EVE relies on TPM functionality for key management and certificate handling. For the i.MX 8M we support pretty much all the hardware and peripheral controllers of this platform: USB, PCIe, Ethernet, HDMI, CAN bus, eMMC, the hardware TPM present on both of these devices, and an LED as well, because on embedded devices it's not that common to connect a monitor. So EVE can use an LED to give some status about the onboarding process, about whether edge applications are running or not, about errors, and so on.
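To give a concrete picture, building an EVE OS image for one of these i.MX boards and writing it to an SD card or USB stick looks roughly like the sketch below. The PLATFORM value, the image path, and the SD card device are placeholders; the exact platform names and output paths depend on the board and the EVE version and are documented in the repository.

```sh
# Sketch: cross-building an ARM64 EVE OS image for a specific i.MX 8M Plus board on an x86 host.
# <imx8mp-board>, <path/to/live.img>, and /dev/sdX are placeholders to replace with real values.
make ZARCH=arm64 HV=kvm PLATFORM=<imx8mp-board> live

# Write the resulting live image to an SD card or USB stick and boot the board from it;
# EVE then starts its installation and onboarding flow against the configured controller.
sudo dd if=<path/to/live.img> of=/dev/sdX bs=4M status=progress conv=fsync
```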
One very nice thing about this platform is that we officially support TrustZone: we have support for it in EVE OS and we deploy OP-TEE. OP-TEE is a trusted OS that runs in the TrustZone. TrustZone, for those who are not familiar with it, is an ARM extension that creates new execution levels. In these new execution levels you can run trusted applications completely isolated, even from the hypervisor, in a secure way. And there is another execution level, called EL3 to be more specific, which is meant to run a secure monitor; this secure monitor is the one that switches between the normal world, as they call it, with the hypervisor, and the secure environment with the trusted OS. So we pretty much have another operating system running, the secure OS, with two execution levels of its own, but now in the secure world: the user space of the secure world and the kernel space of the secure world. And we do support that. A common example are DRM systems; I think I can mention Netflix here, which uses this kind of technology for its DRM. And we support that on EVE.

So we have client drivers for the fTPM, which is a software implementation of the TPM that can run in the TrustZone. We provide a pair of RSA keys, because OP-TEE ships with the public key, and this key is used to verify and load only authorized trusted applications. Of course, the key pair that we provide is just an example; it's present in our build system, but it can very easily be replaced with the user's own key pair. So when we deploy EVE OS on this particular platform, we automatically get OP-TEE running and EVE OS running, and, if the users want, they can build and deploy their trusted applications directly on EVE OS. For the time being we don't provide any trusted application; we leave it to users to develop the trusted applications they want. But the whole infrastructure is there: the secure OS, OP-TEE, is there and running, so it can be used.

All right, so we also support these other boards. The Jetson Nano: we don't have full support yet for all the drivers on the Jetson Nano, but EVE OS can run and can be deployed. And for the HiKey board, currently the development is not very active, mainly due to the shortage of these boards; that's why we are not doing much development on this platform, but we still support it and it can run.

And these are the upcoming devices we are working on: the NVIDIA Jetson Xavier NX platform, through two devices. For the Lenovo ThinkEdge SE70 we are porting our patches on top of the custom NVIDIA kernel to run EVE. We already have a test version running on this device with full support for peripherals like USB and eMMC, and, as I said, we can generate the image, put it on a USB stick, boot the device, and deploy EVE. EVE starts the installation process automatically, you deploy it, and the device is ready to be onboarded to some cloud controller, because that's how it works: in the onboarding process you register your device with your remote controller, and then that's it. Once you have done that, you can deploy your edge applications, you can deploy containers and VMs, you can create an entire virtual network by using EVE, just to cite some of our features. And we also support the developer kit, because we think that's important for developers: they should be able to get a platform where they can run EVE and develop EVE. That's why we think it's important to support not only the production edge devices but also these developer kits, to help and speed up the development as well.

So the key takeaways from this presentation are basically the approaches we are following and the practices we are taking into account to keep EVE very uniform across different platforms, especially on ARM, where we have this diversity of SoCs, of different manufacturers, of different features. EVE uses a common build system, which is based on Docker and LinuxKit.
Well, it's complex, yes it is, but we do support different platforms. It requires minimal dependencies on the host machine, so we don't want to rely on many software stacks or features from the host machine. It does support cross-compilation, and this is really important, because when you don't have cross-compilation you can still build for a different architecture, especially using Docker, but there is no other way than using binary emulation to do that, and that generates a lot of overhead. If we build a full image without cross-compilation support on a different host machine, let's say I'm on x86 and I want to build an ARM target image, or the other way around, I'm on a MacBook with an ARM architecture and I want to build an x86 EVE OS image, it can take a lot of time. Even if you are using a very powerful machine, you still need to emulate the architecture on which the compiler runs. By using cross-compilation we can speed this up a lot, because we run the compiler natively on the host and the compiler generates code for the target architecture. The good thing is that EVE OS provides this consistency between different host and target platforms, so you should expect the same environment and the same microservices on those different platforms. And ARM images can also be provided for specific platforms, because, since we have this different hardware, different controllers, and different features on these SoCs, we must somehow be able to provide support for them too. Sometimes there is no other way, and we need to change the layout of our image to be able to boot on some particular device; we can do that, EVE OS supports that too. And we can build and deploy EVE in a uniform way across all these platforms. So those are the approaches, the goals, and the practices we are following during EVE development.

And I think that's it. I would like to finish with a call for collaboration: let's make our community grow, come join us. EVE is completely open source and available on GitHub. We have our project page, our wiki, our mailing list, we do regular Zoom calls, and we are available on Slack as well. So come talk to us, join us. Any questions that you have, any help that you need to deploy EVE on your device or to port EVE to your device, we will be very happy to help. And there is also the roadmap wiki page about our features and the future of the project. And that's it, that's what I've got for today. Thanks for watching. Questions?

So, what's the most common environment where we see EVE deployed currently? Well, currently mainly x86, mainly those industrial PCs, PC-based computers. But we have seen more interest from different companies in ARM; not that much in RISC-V, to be honest, but in ARM, especially with these new, very powerful devices like the Jetson architecture with the video processing, the GPU stuff. And that's why we are investing more in ARM as well. But mainly PCs, PC-based computers. Of course, when we go more toward the IoT field, let's say, then things change and we see more ARM devices than x86. Okay, that's it. Thank you, Renee. Thank you.