Hi, I'm going to talk to you about RunX. RunX is a new OCI-compatible container runtime that starts containers as Xen VMs. Now, let me go slower on this sentence, because it captures a lot of meaning. It's OCI-compatible because it implements the OCI runtime specification, and it's a runtime, so it can be used together with containerd, for instance, to start containers as Xen VMs. What does that mean? Let me jump forward one slide to this diagram to better explain the concept. Normally, a container is fetched by containerd, and containerd installs the container on your host system. Then it calls a binary called runc to run the application in the container. So runc is the component responsible for setting up the runtime environment and, in fact, is the one conforming to the OCI runtime specification. To oversimplify what runc does, as a way of explanation: it chroots into the container, sets up some additional Linux namespaces, and then runs the application. RunX is a replacement for runc. It works at the same level, it complies with the same interface (sketched below), and it is started by containerd the same way. But instead of chrooting into your application, what RunX does is start a little micro VM, a virtual machine, using the Xen hypervisor to run your container. This is not a large virtual machine. It's not like RunX starts a virtual machine with Debian inside, and then Docker, and then runs the container that way. No, this is a tiny, tiny micro VM, just enough to run the container application inside a virtual machine environment. There is pretty much just the container inside, plus a kernel, just enough to run the application.
You can find RunX on GitHub under the Linux Foundation Edge umbrella, so it's LF Edge RunX. The main purpose of RunX is running containers as virtual machines for embedded; that's the primary focus. So why run containers as VMs, and what does it mean for embedded? A number of things. First of all, running containers as VMs gives you a lot more isolation and security by default, so it supports things such as multi-tenancy. But aside from that, it comes with a number of interesting features, key differences compared to regular containers, that make it suitable for embedded environments. For instance, it supports real time, hard real time. That's because Xen VMs support a very strong form of real time, including even cache isolation for best performance and minimal latency, and you can easily configure one of these container VMs to be started with real-time attributes. So if you care about real time, you can get a very strong form of real time using RunX. Another feature which is typically important in embedded is hardware access. Very often applications need direct access to hardware: accelerators, peripherals, GPIO controllers, all sorts of things. You can give direct access to these resources using RunX, as we'll see later in the presentation, by giving direct access to the memory-mapped regions of these devices and to their interrupts. So your container application can interact directly with the hardware in whatever way it wishes. RunX is an open source project, as I said, under LF Edge, so it's open for contributions, and it has been open for contributions from the start. Just send an email with a patch to the mailing list; we'll review and commit it. There is no need to sign any licensing agreement.
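To make the "same interface" point concrete, here is a rough sketch of the OCI runtime command-line verbs that containerd drives. runc implements these, and RunX has to answer the same calls; the bundle path and container ID below are purely illustrative:

    # containerd (or you, by hand) drives an OCI runtime through verbs like these:
    runc create --bundle /run/containerd/bundles/mycontainer mycontainer
    runc start mycontainer        # start the workload prepared by "create"
    runc state mycontainer        # query the container state
    runc delete mycontainer       # tear everything down
    # Swapping runc for RunX means the same verbs end up creating, starting,
    # and destroying a Xen micro VM instead of a chroot plus namespaces.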
On licensing: the license is Apache 2.0, and we are open for contributions with a contribution model similar to the Linux kernel's. And now it's time to get into the implementation choices. So what makes RunX unique? Why RunX, instead of another project in this space, starting from the default, which is runc, through to other projects that start containers as VMs, such as Kata Containers? Well, RunX is different in a number of ways. First of all, RunX aims at being very, very small and very simple, suitable for very limited embedded environments. It's very simple both at build time and at run time, optimizing for minimal overhead and minimal dependencies, to reduce the size of the binaries as well as the startup and boot times, suitable for environments, as I said, with limited resources.
In terms of the build, RunX strives to be as simple and as easy as possible. The only build-time dependencies we have are GCC, make, and Go, the Go compiler. And the Go compiler is probably going to go away soon as a dependency, which means that soon we're going to have just GCC and make and nothing else as build-time dependencies. That means it's actually easy to cross-compile RunX. If you install a Linaro cross-compiler, for instance, on your regular distribution, you can cross-compile RunX on your x86 laptop for an ARM64 target quite easily, and you'll see that later in the slide deck. It's also important to note that there is no Xen dependency at build time at all. What that means is that you can easily build RunX in your AWS container, VM, any environment; you don't need to have Xen there. It can be a completely different environment from your target, because RunX is completely self-contained both in terms of build-time dependencies and in terms of run-time dependencies. So let's look at the run-time dependencies. Of course you need Xen installed, because otherwise you're not going to be able to start Xen virtual machines. But aside from that, you only need bash, jq, socat, and daemonize. So it's a very limited set of utilities; you just need them to do a little bit of parsing, and that's it. You can easily build in one environment -- I often build in a Debian container on x86 -- and then deploy on an ARM64 Alpine Linux machine, or on Yocto, or in any other environment. Another very important point is that there are no dependencies between Xen and RunX. That means you can easily upgrade Xen, change Xen versions, independently from your RunX version. You can build and run RunX in a way that is completely independent from your Xen version. Now, this is very different from Kata Containers, which used to have, at least the last time I looked, Xen support implemented by linking directly against libxl. That meant there was a build-time dependency on Xen, and also a tie between the Xen version you built against and the one you were going to run against, both for the user space tools and the hypervisor, which made the whole system much harder to build and to deploy.
Other choices in RunX that differ from other projects: RunX doesn't have any in-guest agents. What RunX does is build, at build time, a very small Linux kernel, disabling most options, because you know beforehand the environment it is going to run in, which is your micro VM, your little Xen virtual machine; so you can disable a lot of drivers. So you build a small Linux kernel, then you build a statically linked busybox RAM disk environment.
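As a purely conceptual sketch of what that kind of minimal busybox init has to do (this is not the actual RunX RAM disk script; the interface name and application path are illustrative):

    #!/bin/sh
    # Minimal guest-side setup: pseudo-filesystems, networking, then hand over to the app
    mount -t proc proc /proc
    mount -t sysfs sysfs /sys
    ip link set eth0 up                  # bring up the paravirtualized network interface
    udhcpc -i eth0                       # or apply a static configuration passed in by the host
    exec /usr/bin/container-entrypoint   # replace init with the container's own application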
In other words, the RAM disk provides just enough setup to bring up the network and run the application in the container, nothing more. So the user space environment inside the virtual machine created by RunX is your pristine container environment. There is no foreign agent, no strange daemon running just to connect to services on the host side; there is nothing but your container and a minimal kernel for POSIX compatibility. Speaking of the VM, the VM as well is as small as possible. There is no device emulation; it is a minimal environment. There is no in-guest firmware or boot loader: no BIOS, no UEFI, no ACPI, no GRUB, nothing, just booting the tiny Linux kernel directly. Finally, RunX builds against the OCI runtime specification. The OCI runtime specification, first of all, is a well-maintained spec, but it is also a very nice abstraction level, and we feel it is the right abstraction level to develop a component against. That way it works not just with containerd, which is our default testing environment and what we're working against today, but with any other OCI-compatible container engine, which greatly increases your deployment options. This is all different from other projects that don't build against the OCI specification or that, for instance, require agents inside your VMs to connect to host-side services.
Okay, let's look at the build. Let's say that you want to build RunX on your Linux distro; it could be Ubuntu, Debian, Fedora, you name it. To make things more interesting, let's say that you want to cross-build it: you are building on your x86 laptop and you want to deploy on an ARM target, which could be a Raspberry Pi 4, a Xilinx board, any ARM64 SoC. You just need a cross-compilation toolchain, an ARM64 GCC; you can use the one from Linaro, for instance. You need the Go compiler, and the package from your distro is fine, so you don't need to go to any extra lengths, just install it. And then that's it. You set the ARCH environment variable to arm64, you set GOROOT, you set the CROSS_COMPILE environment variable, you execute the build script, and you end up with your build artifacts, your output, and it's done. So what is the output of the build? The output of the build is a set of scripts. RunX itself, the entry point, is the main script; you copy it onto the target under /usr/bin or /usr/sbin. And then there is a set of other scripts that need to be copied onto the target under /usr/share/runX. Among these are the kernel we mentioned earlier and the RAM disk, which are provided for compatibility so that you can run your traditional container inside the VM. So it's quite easy to install on target; that's one of the things I was trying to say. You just need to copy RunX to the target, copy /usr/share/runX to the target, and that's it. After that, the only thing you need to do is configure containerd to use RunX instead of runc. How do you do that? In containerd prior to version 1.2.9, you just edit config.toml. In case you didn't know, config.toml is the main configuration file for containerd, and it lives at /etc/containerd/config.toml. The only thing you need to do is set the runtime to /usr/sbin/runX, as shown on the slide.
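A rough sketch of that cross-build and install flow, assuming a Debian/Ubuntu build host (package names, the GOROOT path, and the exact build script invocation are illustrative; check the RunX README for specifics):

    # Build-time dependencies on the x86 host: GCC, make, Go, plus an ARM64 cross-toolchain
    sudo apt-get install -y gcc make golang gcc-aarch64-linux-gnu

    # Cross-build RunX for an ARM64 target
    git clone https://github.com/lf-edge/runx
    cd runx
    export ARCH=arm64
    export CROSS_COMPILE=aarch64-linux-gnu-
    export GOROOT=/usr/lib/go           # wherever your Go toolchain lives
    ./build.sh                          # the repository's build script; see its README for the exact name

    # Install the output on the target: the entry-point script plus the support files
    scp runX root@target:/usr/sbin/runX
    scp -r <build-output>/* root@target:/usr/share/runX/   # includes the compatibility kernel and RAM disk

    # For containerd < 1.2.9: edit /etc/containerd/config.toml so the runtime points at
    # /usr/sbin/runX instead of runc (the exact key depends on your containerd version).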
If containerd is greater than 1.4.0, instead of the config.toml route, you pass --runc-binary with the path to RunX, and that selects a different runc binary for you directly from the command line. Unfortunately, containerd in between these two versions, so the 1.3 series, does not seem to offer a way to change the runtime, or at least we haven't found one that is easy to use yet. So you have to resort to little hacks, such as renaming RunX to runc so that containerd will execute it thinking it is runc. Not nice, but it works. And now I'll leave it to Bruce to tell you how to use RunX with Yocto.
Yeah, I'm going to go over how we have integrated RunX, as an extension of, or a wrapper around, the simple build Stefano was talking about: how we can put it into a deeper Yocto stack, an OE build stack if you will, so that you can build the platform, RunX, Xen, the guests, and everything else all in one build and then deploy it. That's what I'm going to talk about. But first, a little bit about myself. I've been a maintainer in Yocto for 11 years now. I look after the meta-virtualization layer in Yocto as well as some reference kernels and other things, which is why I was able to help do this integration. As you'll see, we've talked quite a bit about meta-virtualization. To put that in context, in case people aren't familiar with Yocto and OpenEmbedded, I'm going to quickly cover why you would use Yocto and OpenEmbedded to build, then I'll go into a little bit of how it's pulled together in layers, and I've captured some build examples and configurations that I put together while getting ready so we could demo it. So the question is: as Stefano was saying, sure, the way RunX is built is absolutely great for local development, for quick development, for an individual developer. So why would you ever bother to bring an OpenEmbedded/Yocto build and deploy into the picture? Because obviously every layer you add is a little bit more complexity and a little bit more time. The big thing is that if we have an integration with the Yocto Project and OpenEmbedded, then we can immediately leverage their core values: things like configurability, license management, and of course their cross-build capabilities so you don't need another toolchain; you can do fine-grained image composition, tuning, and all the other capabilities for building your own embedded distro that Yocto and OpenEmbedded bring to the table. It also opens up a transition path from development to production. So while you're playing around with RunX, learning how to integrate it with a deeper framework and launching containers, whether with containerd or something larger, you can take that work and capture it in a recipe in a layer, and then you have a way to hand that configuration to somebody else; you can go into a production environment, CI/CD, and all of those things are fairly straightforward to do once you're using the Yocto ecosystem integration.
I would also say that what you get with OpenEmbedded and Yocto is an active community and integration with the BSPs you might want to use, whether it be a Xilinx board, a Raspberry Pi, or any other Yocto-compatible BSP capable of supporting Xen: you can find a layer, you can find a kernel, and you can use that community's BSP to build and get a platform. You don't have to source all of those bits yourself, get a vendor kernel, and figure it out. And the other thing that I find interesting, at least, is that we can do what are called multiconfig builds. I'm not going to go into great detail on the mechanics, but what it allows, in recent Yocto releases, is to build your Xen host, your guests, your containers, and your firmware in a single configuration on a single platform. It knows how to switch toolchains, it knows how to compose different images and then bundle them all together with everything that you need to deploy. So you don't have to keep track of all the dependencies and revisions of the software yourself to get a reproducible build; a multiconfig build does that automatically. And as I've been hinting at, the build of RunX within Yocto is an integration. It doesn't replace the simplicity and the ease of build that Stefano was talking about; it wraps that into a recipe. It uses the same components, it makes sure the Yocto cross-compiler is there, it builds that same initrd, it packages the RunX scripts into whatever package format your package manager of choice uses, and it integrates them into a larger image build. But you can, at any point, do your development and build directly against that upstream LF Edge RunX. It in no way replaces or changes that; it simply integrates it into that one-stop shop. All right, next slide.
To step back a little bit, I hinted at this when I was describing why we would do this integration. RunX within the Yocto Project, if you will, is openembedded-core, plus a BSP, plus meta-virtualization; those are the three big components. In the Yocto Project, those are layers, and those layers define the platform and all of your software stack possibilities. These layers contain all kinds of things, much more than RunX, much more than you might want to use to build a system capable of leveraging RunX. Then we have a distro configuration, package recipes, and image recipes; they come into play when you are customizing, picking and choosing how you want individual components to be configured and how you want the images to be composed. What I was doing for the integration is providing baseline references for all of this that can be extended or used directly. So for RunX, OE-Core provides the base support: that's your toolchain, your base packages, whether it be coreutils or the reference kernel and those sorts of things. It does the image construction and provides the lowest level, all the common parts of the build. When we layer on meta-virtualization, that's what brings in container runtimes and their supporting pieces.
So, you know, I encourage you to go check out meta-virtualization if you're interested in this; there are all kinds of virtual machines, container runtimes, and supporting projects found within the meta-virtualization layer. That's where we get containerd, and CNI if you want to use that; that's where we have the Xen host image recipe, and dom0 gets built out of that as well. And U-Boot, an initrd, and the image build are all defined in meta-virtualization. Finally, your BSP layer comes into play to provide your kernel, your boot loader, your firmware, and any tightly coupled user space packages. In the example I'm going to go over, I'm using meta-xilinx as the example BSP provider layer. All right, next slide.
I captured a little bit of what I did as a reference build, and that's on the ZCU102. It can be booted on hardware and it can also be booted on QEMU, which is why we chose it; it's quite accessible as an example. You can spin images for anything that supports Xen in the right way, so obviously the Raspberry Pi is another good example of something you could build for. I'm not going to walk through every single step of this, but I wanted to point out that this shows the layer stack I was talking about: we clone the main openembedded-core, in this example via Poky, which is the Yocto reference distribution that includes it. I'm bringing in meta-openembedded for some higher-level packages, meta-virtualization, and of course the Xilinx open source components from GitHub. We initialize the build environment; this is standard Yocto stuff. We create ourselves a build workspace, if you will. And then we add the layers I was talking about: we need filesystems, we need Python, networking, of course meta-virtualization, and then three parts of the Xilinx BSP. Those last three would change based on your target machine, whether it's virtualized hardware or not, but everything else, the top part, stays the same and would always be done.
And now this gets a little more involved. This is not everything you would need, and it will eventually be pulled into what we call a distro configuration so it works a little more out of the box, but it shows some of the things that you need to set in your local.conf, the configuration of your local build workspace, to say what you want to happen in the build. It's mainly here for reference, but I'll hit a few high points. We're setting the machine that we're building for, which, as I mentioned, is the ZCU102. I mentioned multiconfig builds being one of the good things about Yocto; we're defining one here to build some firmware that is needed to boot the platform. You could easily substitute that to build a container that you were deploying, multiple containers, different VMs, whatever you want. In this example, I'm only building the firmware, because we're pulling down containers directly from Docker Hub to show that they can be run. It's turning on some distro features; as I mentioned, we need Xen, we need virtualization. And then it's making sure that containerd and runc are in the image. After that, there's some configuration to make sure that we're using the Xilinx QEMU so we can boot the platform. The rest is all standard stuff.
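A condensed sketch of that layer setup and local.conf, using the names from the talk (branch names, the exact bitbake-layers invocation, the machine name, and package names are illustrative and depend on the release and layer versions you use):

    # Clone the layers (Poky = OE-Core plus the Yocto reference distro)
    git clone https://git.yoctoproject.org/poky
    git clone https://git.openembedded.org/meta-openembedded
    git clone https://git.yoctoproject.org/meta-virtualization
    git clone https://github.com/Xilinx/meta-xilinx

    # Initialize a build workspace and register the layers
    source poky/oe-init-build-env build
    bitbake-layers add-layer ../meta-openembedded/meta-oe \
                             ../meta-openembedded/meta-filesystems \
                             ../meta-openembedded/meta-python \
                             ../meta-openembedded/meta-networking \
                             ../meta-virtualization \
                             ../meta-xilinx/meta-xilinx-bsp   # plus the other meta-xilinx sub-layers your machine needs

    # A few of the local.conf highlights mentioned above (conf/local.conf):
    #   MACHINE = "zcu102-zynqmp"
    #   DISTRO_FEATURES:append = " virtualization xen"   # older releases spell this DISTRO_FEATURES_append
    #   IMAGE_INSTALL:append = " containerd runc"        # package names depend on the layer versions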
And then there's a multiconfig for the PMU firmware that I mentioned, and that just indicates how to build that secondary bit of software that you need to boot. The one thing I'll say is that everything we're showing for the Yocto integration is mostly, I would say 95%, now available in the master branch of meta-virtualization, and any other patches, to OE-Core or the other projects, have been sent out. Stefano has merged things into RunX, for example, so we're covered on that front. So it should be reproducible from a build. If you weren't using multiconfig, because you happen to be on an older Yocto Project release or for some reason it isn't available, you can fetch some of these pre-built binaries from Xilinx's site, and I gave an example just so people know there's an alternative to a multiconfig build if they need it. I'm not going to go into detail on those, because they're built automatically if you use the multiconfig.
And then, finally, you build your image. After all that configuration is set up, in this case I'm building two images; you really only need xen-image-minimal for what we're demoing with RunX, but core-image-minimal is a good, simpler test image to build to make sure everything is sane in your configuration. What happens after that build? It will go away and churn, depending on the speed of your build server; a first build from a clean state could take two hours because you're building everything. Subsequent builds use what's called the sstate cache and only rebuild what changed, so they are much faster. So you might have a bit of a long first build, but the outputs that we've defined get dropped into the deploy directory, which is build/tmp/deploy/images/<your machine>; that's where these files will be sitting. In the example builds I talked about, I did a few listings: you get a xen-image-minimal, which shows up at about 76 megabytes as a U-Boot-wrapped RAM disk image, if you will, which is the same content as the compressed tarball; it's a bit bigger in the other formats, and I'm highlighting here that in that configuration we built several different image types out of the same packages. You also get the kernel boot image, which is about 17 megabytes compressed and 56 megabytes uncompressed, in the .ub form defined in meta-virtualization to make sure it's U-Boot compliant, if you will. And finally, we also built U-Boot itself, and that's much smaller. So, as I said before, you get the boot loader, you get the kernel, and you get the minimal host image: everything you would need to deploy the system, out of a single build. All right.
There is some target configuration interaction that is required, either for boot, where you may need to talk to U-Boot, or, as Stefano was mentioning, you may need to edit containerd's config.toml; that's what I'm talking about there. There are still some things we could turn into configuration packages or handle differently, but there is a little bit of tweaking you still may need to do. And then in this slide, I showed how you would launch directly from your build directory, how you could test out what you did. So I have two; you use this command, which is a Yocto Project command called runqemu.
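A rough sketch of that build-and-boot flow, using the image names from the talk (the machine directory and exact artifact names will differ per release and target):

    # Build the images (the first build can take a couple of hours; later builds hit the sstate cache)
    bitbake core-image-minimal
    bitbake xen-image-minimal

    # Build artifacts land in the deploy directory
    ls tmp/deploy/images/zcu102-zynqmp/

    # Boot either image straight from the build directory;
    # "nographic" gives a headless console and "slirp" uses user-space networking
    runqemu core-image-minimal nographic slirp
    runqemu xen-image-minimal nographic slirp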
I booted this simple core-image-minimal; it boots to a prompt, a headless, text-based console you can log into and have a look at. And I also show that, just by changing the image name, you can do runqemu on xen-image-minimal. The nographic and slirp options say run it headless and use user-space networking, which is much easier than setting up a tun/tap for most people's purposes. And that is how you would boot the two different images. All right? And here I wanted to show that, yes, if you boot xen-image-minimal and you're talking to U-Boot, there is a little bit of configuration. I'm not sure if it's in our slides, but there is an image builder available, I think it is somewhere in the slides, which can generate this for you. You do not write this by hand, because it's error prone, but this is showing what you would have to do for U-Boot: you TFTP the images, you need the DTBs, you need Xen, the RAM disk and the kernel for the minimal image, you need to do a little bit of FDT, some device tree manipulation, for dom0 and Xen so everything knows where it is, and then you boot it. So again, this is here for reference, to let people know that you might have to do this; it would vary with the hardware you're using, but you don't write it by hand, there are ways to generate it. All right, and that's it for me, back to Stefano.
All right, so we've seen how to build and how to run RunX. As I mentioned, normally, for traditional containers, RunX provides the kernel and the RAM disk. Why? A container is essentially a set of user space Linux binaries, or, if we want to be pedantic, a set of POSIX-compatible user space applications. So we need to provide a POSIX compatibility layer to run them, and that is basically what a kernel is for. So we need to provide a kernel to run inside the virtual machine to supply that POSIX compatibility layer. The little RAM disk is only necessary to set up, pretty much, the network and to identify which one among the applications in the container is the one to run. So both the kernel and the RAM disk are provided by RunX. When you say "run me a container, please", RunX goes and finds the container, automatically sets up a virtual machine configuration for you behind the scenes, and calls xl create; xl is the usual command-line utility to start and stop VMs on Xen. RunX itself provides the kernel and RAM disk for the VM. However, there is no reason why it couldn't be different: the kernel could come from the container itself. Why? Keep in mind that with RunX, the isolation and security boundary is the virtual machine, not the kernel. The VM, the dotted box on the right, is the security and isolation boundary. So from a security perspective, it doesn't really matter what you run inside, and there is no reason why the kernel has to come from RunX. The kernel could come from somewhere else, and RunX would be very happy to run that kernel instead to provide the POSIX compatibility layer inside the VM. Of course, a very good candidate for providing a different kernel is the container itself. The container could come with a kernel inside its own filesystem; you can imagine the container having a /boot/vmlinux binary. It would just be passed to RunX, and RunX would find that kernel and use it instead, because, again, RunX doesn't really care which kernel it runs inside the VM.
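As a hypothetical sketch of what such a container image could look like (the base image, application name, and everything except the /boot/vmlinux location mentioned above are illustrative):

    FROM alpine:3.14
    # Ship a guest kernel inside the container's own filesystem;
    # RunX can boot this kernel in the micro VM instead of its bundled one.
    COPY vmlinux /boot/vmlinux
    COPY my-app /usr/bin/my-app
    ENTRYPOINT ["/usr/bin/my-app"]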
So this is already supported now, and the only thing we need, in addition to what I just described, is a way to advertise the presence of the kernel, or the presence of the RAM disk, inside the container. The container needs to be able to say: hello, I have a kernel, please run my kernel. Then RunX can find this kernel, see that the container comes with its own, and use that kernel inside the VM instead of its own. Today we are doing that using environment variables, but we really want proper labels or annotations specified as part of the OCI image spec. So why would you want to do that? Is there a good reason? You can imagine multiple reasons. First of all, you can decide to use your own special kernel version: you might want the latest upstream, or the RHEL kernel, or the Debian kernel, or you might need a very specific driver that is not upstream yet, or you might want the real-time patch set applied, so you might want to run Linux-rt. As I mentioned, RunX is built mainly for embedded, and one thing that is very important in many embedded deployments is real time. If you are going to set up the VM to be real time, then you might as well have a real-time kernel inside the VM too. These are all reasons that are easy to guess, but I can add one more that might be less easy to guess, a bit more far-fetched maybe, which is: why not run an RTOS? If you can specify your own kernel, it doesn't have to be a Linux kernel. It could be a different kernel altogether. In fact, it could be a tiny bare-metal application, or it could be a real-time OS: a Zephyr kernel, or a FreeRTOS kernel, packaged as a container that advertises the kernel to run. RunX doesn't really care whether it's Linux or another kernel, as long as it runs in the VM. It will just call xl create as usual, and what you end up with is a VM with your kernel, your RTOS, running inside. So now you have a way to use Kubernetes to deploy, at scale, on target, your real-time applications built with a real real-time OS, using Xen VMs, and you can exploit all the configuration options to get the best possible latency in these environments. Of course, if you're going to run an RTOS, it's very important to be able to assign hardware directly to it, because that is where it is most useful, so that it also drives physical resources; and we're going to see soon how that can be done.
So, in summary, RunX today supports containers that bring their own kernel. The container can specify its own kernel, or it can also specify its own RAM disk (which is less useful), so the container can select its own special version of Linux, any version of Linux, but also non-Linux OSes, from RTOSes to bare-metal applications, and even proprietary kernels such as VxWorks; we're going to see a demo later actually based on VxWorks with RunX. Again, the long-term goal is to have proper flags standardized as part of the OCI image spec; today we're using two environment variables, one to specify the kernel and one to specify the RAM disk.
Device assignment: as I mentioned, if you are interested in real time, you are most certainly interested in driving a physical resource. How do we do that? We exploit device assignment for Xen virtual machines.
So Xen, if you're not aware, offers the ability to assign devices, device passthrough, directly to VMs, by remapping the memory regions and the interrupts of these devices into virtual machines. What we can do is run RunX with a few extra arguments, asking it to also assign a device to the container you're about to start as a virtual machine. How that is done is, again, with an extra flag: you set an extra flag saying "these are my extra configuration options", and RunX reads those options, appends them to the VM configuration file, and then starts the VM with it. If you're not familiar with it, xl create, the command-line utility used to start a Xen VM, takes a text configuration file with your virtual machine options, from the virtual machine name to the amount of memory and the number of virtual CPUs, as well as any other configuration, and that includes, of course, device assignment. So we added an extra option that, again, is an environment variable. This one is different from the one before, because it can only be set by the administrator, by the user calling ctr for containerd; it cannot be set from the container. And you can customize any of these parameters in any way you like: you can change the memory allocation, you can change the number of virtual CPUs, you can set real-time related configuration and options as much as you like. And, most importantly, you can add device assignment, device passthrough configuration options, such as MMIO region remapping, interrupt remapping, and even a device tree snippet to advertise the presence of the new device via device tree inside your Xen virtual machine.
Everything I said so far is working today and is upstream in RunX. What I'm telling you now is our vision for the future. In the future, we would like to take this one step further and actually allow a container to request a specific hardware resource. The container could come with another set of flags to say: please, I would like an Ethernet card to be directly assigned to me, or this other accelerator to be directly assigned to me. Even more importantly, nowadays many accelerators are in FPGA, you know, Xilinx FPGAs, or they could be on a co-processor. So they are actually running their own software, their own kernel, or you must provide the bitstream and load it into programmable logic to have the accelerator come to life. The idea would be for the container to come with the FPGA bitstream, and then containerd would call a service, an FPGA manager, to program the bitstream into programmable logic. The new programmable logic block would come alive, and then the related resources of this FPGA block would be assigned directly to the virtual machine using RunX device assignment. That allows you to have a container that carries both the software to run on the ARM CPU cores and the bitstream to load the accelerator into programmable logic, so that they can be used as a pair. But it's not just about bitstreams. The same approach could be used for co-processor kernels running on other, foreign CPU clusters, such as little Cortex-M or Cortex-R ARM processors, or even Xilinx AI accelerator arrays running AI kernels; you could load your own AI kernel this way.
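To make the mechanism that works today concrete, here is a rough sketch of the kind of extra configuration snippet RunX can append to the generated VM configuration (xl.cfg syntax; the memory size matches the demo that follows, while the IRQ numbers, addresses, and device tree file name are purely illustrative):

    # Extra xl.cfg options appended by RunX to the VM it creates for the container
    memory = 64                            # override the 1 GB default with 64 MB of RAM
    irqs = [ 54, 68 ]                      # pass the device interrupts through to the guest
    iomem = [ "0xff010,1", "0xff110,1" ]   # map the device MMIO regions (hex page number, page count)
    # device_tree = "passthrough.dtb"      # optional: advertise the device to the guest via device tree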
So, stepping back: this is basically an infrastructure to deploy, using containers, not just the software running on the CPU cores, but also the accelerators themselves in programmable logic, or the kernels running on other, foreign, little processors. All right, I think now it's time for the demo to show you how RunX runs. I'm going to first tell you what I'm going to show you using diagrams, and then I'm going to show you the terminal, running it live. We are going to use RunX to start a little container with a VxWorks kernel inside, and then we are going to use RunX to start a little bare-metal application with direct access to hardware. The bare-metal application has direct access to two hardware peripherals: the second serial port, so the second UART, and also the TTC timer, which is a physical timer the bare-metal application is going to use for latency measurements.
I'm going to jump to my terminal window now. This is a Xilinx MPSoC board running here on my desk, just booted; Xen is running, dom0 is running. I'm going to execute this little script. It just mounts a few cgroup mount points, then it starts containerd and loads three containers. One is a little Alpine Linux container, the second one is the bare-metal application I was mentioning, and the third one is the VxWorks container I was mentioning. I'm going to run VxWorks first. I forgot to run containerd, of course; sorry, let me run the script, execute containerd, and import the containers. Now I'm going to run the first one, with VxWorks inside. As you've seen, VxWorks came to life: VxWorks 7 by Wind River. Thanks again to Rob Woolley from Wind River for providing this container with VxWorks inside. As a reference, I want to show you that there is a VM running; in fact, it has the same name we gave to the container, and there is also, of course, a container running with VxWorks. The second thing we're going to run is the bare-metal application. This is, as I said, the little bare-metal application with control over the second UART as well as the TTC timer. So we're going to see here -- the output up to this point was from the previous run on this serial port -- but this is the second UART, and there is a TTC latency measurement being done at the moment; it just completed a thousand iterations on the TTC timer, with maximum and average interrupt latency in nanoseconds, and so on. If I go here, you're going to see that there are in fact two VMs running. The second one has only 64 megabytes of RAM instead of the default of one gigabyte. Why? Because we changed the configuration using the text file that we passed as an extra argument to RunX. As you can see, we set the memory to 64 megabytes, which overrode the default of one gigabyte. Then we mapped a set of interrupts here, and the iomem, the memory regions, the ones corresponding to the serial port and to the TTC timer. Finally, I'm going to run the Alpine Linux container too, and that's with the kernel and the RAM disk provided by RunX itself. And that's the end of the presentation, so if you have any questions, please ask.