Yeah, all good. Good morning. Hey, good morning. Let's give it a few minutes, I see some other people joining. It's actually not a good morning, it's the middle of the night over here. Oh, okay. Where are you? Australia. It's two o'clock. 2am? Yeah. Oh, wow. That's why I'm looking a little closer to my pillow than my real face. Yeah, sorry, we're a bit tired. Okay, no worries, no worries. So the camera is on my Mac here, but the screen is on this side, so if I don't look at you guys, it's because I'm looking at the screen. Okay. Yeah, no worries. Yes, it's getting more difficult to wake up in the morning as you get older. I'm older than you, what are you talking about? That's why I said I'm getting older — it's getting more difficult to wake up. I think Australia is probably not the best time zone for collaborating with other places. Yeah, my other CNCF meetings are all at crazy times too. Yeah, we were in the US early last year, but then the coronavirus forced us to go back. There was no way — the insurance companies said we're not insuring you anymore, so there was no way for us to stay in the US. I see. We'll see what we can do this year. Where were you in the US? San Francisco. Okay, cool. And it was in January, February, so you could really see it: when we arrived in January it was still a normal city, and then within about a month and a half it basically died down. The streets were all of a sudden empty, restaurants emptied, and everything came to a stop. Yeah, the pandemic hit. Yeah, hopefully this year will be better. My parents got vaccinated two days ago, so that was good to hear. All right, we haven't even started over here, but on the other hand there are not many cases. I'm in Queensland, there are virtually none at all. Other states have a few at the moment, but I think it's under control at this stage.
Yeah, I think initially they had more in Australia, but it's now under control, right? Yeah, there was one in particular in Victoria that was a huge outbreak, and they locked it down heavily for about three weeks and then that was sorted as well. There's one or two cases every now and then, and then there's a little pocket of like 15, and then they close it down again and it goes away. Yeah, it works at the moment, I think. Cool. Yeah, we can get started. This session is actually recorded, so it'll be available on the CNCF YouTube channel. So if anybody has questions about this later, they can also send them through that channel and watch the session. Okay, good. I'll try and keep my voice at a level where people can hear me. Can you guys hear me, by the way? Yep, yep. All right, I guess we'll take you through this and keep it very technical — it's not going to be the typical salesy presentation that we do, Ricardo. I'll tell you a bit about our background. We started the company in 2014. We built a graphical interface that integrated into the VMware vSphere client to manage containers, and we were using vSphere, Chronos and Docker back then. We then pivoted the company and started developing a unikernel. Jens, who's on the line, wrote a unikernel from scratch. And you can ask Jens — it's good that we have him on the line — when we say from scratch, we really mean from scratch. It was a single-process unikernel. It could be up in about 10 milliseconds on most public clouds, with a 785-kilobyte footprint. But we also built packaging and deployment solutions around this unikernel, because we had to find a way to package applications and run and manage them. And we did this because we wanted to address container security, much like some of the other projects that are out there at the moment.
So we took it to our first customers, who were running the stuff for us, and they said: it would be great if you built this. We love the fact that you can package all the applications the way you do, the fact that you can migrate containers. But we just cannot trust a kernel that you wrote yourself, because the Linux kernel has years of bug fixes and everything in it. So that cost us a lot. But what they did say to us was: why don't you just use the Linux kernel? So that's what we did. In 2019 we pivoted. We take the vanilla Linux kernel from kernel.org — that kernel — and we run any Linux application on top of it. But what we've done is we've taken the way we package, the build configuration, and the way we create these immutable objects for the application packages, and we run them on Vorteil. Essentially, we can run these containers, or just applications, as isolated virtual machines. But they're micro-VMs — very, very small, very simple. The engineering behind it is obviously a bit more difficult, but what we end up with is the ability to run apps with the security isolation of VMs while keeping the packaging and efficiencies of container platforms. We'll take you through how it works and what we do. All right. So, very simply: when we present to our customers, what we talk about is the fact that we can take this stack and reduce it to just your application running on the Linux kernel inside the VM. We make the application king. There are three things we focus on when we talk to customers. First, resources. The kernel itself is, I think, seven meg at the moment — yeah, it's about seven meg, and that includes all our tools, all the configuration, everything that we've built into it as well. We allocate about 34 meg of memory, and that's for the kernel itself, for Vorteil itself.
And the CPU — you can have a look, I think we run at about 0.002% CPU utilization when we're idling, not doing anything. Second, it's about reduced complexity. There's a whole bunch of OS lifecycle processes that get eliminated if you don't have an operating system, because essentially that's what we have. As an example, you wouldn't do operating system patching normally, because when you patch your applications you're effectively patching the operating system as well — the rollout is us taking the application, binding it to the Vorteil kernel and then pushing it out as a VM. Third, increased security. Obviously there's a lot we can talk about there: it really is the benefits of virtual-machine isolation technologies, but also — having stripped out, or actually not stripped out, simply not having any of the components of a normal legacy operating system — it hardens a whole bunch of items on the CIS benchmark checklists. So it really is about those three things we talk to customers about: resources, complexity and security. What we talk about a lot is this idea that we codify the micro-OS. So instead of managing operating systems the way you used to — or even containers — it's the same principles as running containers: codifying the operating system, OS as code, applications as the runtime. We make the application basically the runtime. And then we can push to any hypervisor out there. We support all the major ones like Xen, KVM, VMware ESXi and Hyper-V, and also type-2 hypervisors like VMware Fusion, Workstation Player, VirtualBox, QEMU and Firecracker. Firecracker we actually use heavily. And then we have a Kubernetes runtime integration as well, so we can manage our VMs via Kubernetes — it's just a simple containerd runtime integration.
And that is really what we talk about: taking everything that you used to do for the operating system and building it as a code platform — codifying the operating system. I'm not going to spend too much time on this. If you understand the principles behind it, you'll understand that security improves, and it becomes fully portable — the packages are fully portable because we don't ship the kernel with any of the packages. It's literally just an application package. And then at runtime we obviously save costs on infrastructure, software and operations. There are three components to it. We've got Vorteil Studio and the CLI — this is the runtime CLI that you use to package, run, build and manage everything. Then the kernel itself — most customers will never see the kernel because they just won't need to know about it. And then the build server. Our build server is like a repo server, and we can also offload the build-server functionality into hosted clouds, private clouds or public clouds. It speeds up the build process because you don't have the latencies of uploading and downloading from your desktop, and you can instruct the build server to build and send into an environment directly. The build server is distributed, so it can build across any public or private cloud. And we obviously depend on the build server to push the images out. You can do it from your local desktop as well, but then there's the latency of uploads to public clouds and everything else. Yeah. So how is this — is it similar to something like Kata Containers? Yeah — actually, Jens, do you want to tell them? The difference is that Kata Containers is running containers within a kind of predefined operating system. They have a little agent inside, which they use to start containers. Whereas this doesn't have anything around it. It's your app and that's it. There's no agent or anything on the machines. It's the app you define and that's it.
That's all that's on the machine. So how do you talk to the VM — the application that is running in the VM? Do you just let that happen, or what mechanisms do you use to communicate between the host and the VM? For doing what, for example? I'm thinking about the file system, for example — something like Kubernetes volumes, right? You have a volume on the host and you want it to be visible inside the VM. Yeah, okay. At the moment, file sharing is the same kind of issue Kata has as well: the VM cannot share files with the host. We're working at the moment with Firecracker on a vsock implementation, where you basically have a vsock server running on the host and a vsock client on the guest machine, which can share files. So that's where that's headed. Actually — sorry guys, we skipped a whole bunch of slides — this is how our Kubernetes integration actually looks. Okay. I think I'd hidden the slide. Oh, I see. So you're using Firecracker as the VMM. That is obviously specific to the Kubernetes integration. For any of the customers that we have that run it in a normal setup, it's just virtual machines. So the same processes they have in EC2, the same load balancers they have in EC2 — I'm specifically talking about AWS, obviously — all those processes, everything stays the same. As an example, I'll show you the process that most — actually all — of our customers have implemented: they build the micro-VM in the Studio or the CLI, they provision using Ansible, Terraform or VMware's provisioning tools, and then they run it. We've only had one customer who's actually used the Kubernetes integration; we did it as a proof of concept for them. Most customers actually just use the VMware vSphere clients on their private clouds, and the public cloud management utilities that AWS, Azure, Google Cloud and everybody else has. It's very simple. Got it.
So yeah, another question: this is a stripped-down kernel too, right? You removed some of the device drivers and some of the overhead that comes with the kernel, and then you can run most mainstream applications on it. So I guess my question is: what kind of kernel is it? It's the kernel with a few minor changes — you can look it up, it's probably 500 additional lines — and to start it, our own vinitd has a custom bootloader, because Linux usually has these 1.5-stage bootloaders and we just use a one-stage bootloader. So it doesn't do any — what's it called — yeah, it jumps straight to the Linux kernel. And our little changes are only to mount a custom file system where our stuff is, and then it starts up. So they're minor changes; they're not really changes to the kernel itself. Yeah, I was thinking about this other project from IBM, Lupine, where they actually grabbed the Linux kernel, stripped out lots of parts of the code and recompiled it, and it became this really lightweight kernel that didn't support everything, but it allowed you to run most of the major applications — language runtimes, things like Redis or MySQL, right? No — Ricardo, when we built this we had to support everything right from the start. Vorteil runs — we support — any ELF 32- or 64-bit binary, and the most important part was that we couldn't go to the customers we had and tell them they had to recompile their software or their applications to run on Vorteil, because they just wouldn't use it. That's why we eventually chose the Linux kernel: you literally just drop the binary in and it runs. There's nothing else you need to do, nothing special — I mean, obviously the only thing you might need to add is the libraries you need.
But that was the whole principle behind this: you have full control over the libraries that you include in your Vorteil machine. We import certain libraries by default — just the DNS libraries. I don't even remember the name of the DNS library anymore. Well, not by default, by the way — there's a command to import them, and I always run that command for every single implementation, so it's by default for me now. But yeah, that was the whole idea, and I'll show you a quick demo of how it works — it's actually pretty straightforward. The whole idea was, A, to go back to the customers and tell them not to change anything they're doing now, and B, the principle was that while we tell them about the security and isolation of VMs, we also want to give them the ability to run containers. We want them to be able to just pull from Docker Hub and run. And that's what we do now: you give it a command, you tell it the container to run, it'll pull it down, convert it to a Vorteil VM and you can start it up. Got it. Yeah, that makes sense. So typically, if you don't remove anything from the kernel, you can just run anything. Yeah — I mean, we stripped down the default Linux config: keyboard drivers, mouse drivers, stuff we don't need. Of course we removed all of what comes by default with the Linux kernel and kept just the basic drivers to run on clouds and server applications. Yeah. And this is vinitd — this is actually the most important part of all of this. It's on GitHub, it's an open source project. Sorry, I'm losing my voice. We just go through four phases: a pre-setup, a setup, a post-setup and launch. And like I said, we mount the special file systems and, in general, generic resources — NFS gets mounted.
NTP, DNS, all the things that you would need just to start up, and then the application gets launched by default. This is the Kubernetes integration we run on Firecracker. Jens, do you want to spend some time on this? Yes — as you can see, our build server is obviously very important here, because what we do is build images on the fly. We inject information on demand based on the deployment requirements — memory, environment variables, all the things you need to run — and then we run it on Firecracker. I think anybody familiar with this would understand how it works. The vsock host file share we're still working on, but the rest should be the same experience. The apps within your pod can talk to each other over localhost, so that should be pretty much the same. Another question: containerd is actually working on some integration with Firecracker — is that what you're using now? No — I think they're doing the same thing with an agent, right? Okay. I think the difference is, one of our things is that we can build disks very fast. We have our own way of kind of stream-building disks — you'll see it in the demo. We can build a multi-gigabyte disk in a few seconds, and a small disk in under one second. So we build whatever you need — the whole disk with Linux, your app, all your stuff — within just a few seconds, and then you run it. So you're simulating a container inside the VM too, right? Or, when you talk about Kubernetes — you can have multiple containers in a pod, right? Yeah, but there are multiple Firecrackers in our case. We don't have one Firecracker and then start the containers in there — I think that's how Kata does it: they have one VM and then they start the containers inside. We don't do that; we start one VM per container. Okay. Yeah.
Yep. We've spoken about this. This is pretty straightforward. And yeah, do you want to see it in action? Yeah. All right, let's have a look. I'm going to show you the command line — there's the Studio as well, which is a graphical interface, but that's for enterprise customers. It's actually extremely simple to use. The first thing I'll do is pick an app to run. Let's pick Tomcat, because Tomcat is simple. So: docker pull tomcat. Then what we can do is vorteil projects — I think it's convert-container — tomcat, from Docker Hub. Obviously you can add different repositories; I'm just telling it to pull from the default repository, which is Docker Hub, and I want to put it in tomcat-docker. There's no magic behind this, Ricardo: all we're doing is connecting to the Tomcat repo on Docker Hub, pulling down all the layers we need, unpacking them, and building the virtual machine. Yeah — we've got terrible internet in Australia, so it's going to take a while. But essentially, when it's done, we'll just run the VM, and I'll show you the insides of it as well. All right. So it creates a directory called tomcat-docker, and we create this default .vcfg file. Essentially, the Vorteil configuration file is just the arguments — you could split this up into binary and arguments — the environment variables that we pass, the working directory, and the port mapping, which is only used if I run it on my local machine, because what I'm going to do now is start it up under VirtualBox. I'll actually open VirtualBox and show you what it looks like when I run it. And the rest of the file system you just saw is absolutely standard — whatever is in the Docker container. We're not adding anything; that's all Docker container stuff. So all we've done is pull it down, unpack it, convert it, and create the default .vcfg file. Nothing crazy about it.
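The conversion flow just described can be sketched as a short shell session. The `vorteil` subcommand names and the `.vcfg` field names below are assumptions reconstructed from the demo, not a verified schema:

```shell
# Sketch of the demoed flow, assuming a `vorteil` CLI with the subcommands
# mentioned in the demo (names and flags are assumptions, not verified):
#
#   vorteil projects convert-container tomcat ./tomcat-docker
#   vorteil run ./tomcat-docker
#
# The conversion leaves a project directory holding the unpacked container
# filesystem plus a default.vcfg. A minimal, hypothetical default.vcfg:
mkdir -p tomcat-docker
cat > tomcat-docker/default.vcfg <<'EOF'
[[programs]]
  binary = "/usr/local/tomcat/bin/catalina.sh"  # entrypoint from the image (illustrative)
  args   = "run"
  cwd    = "/usr/local/tomcat"
EOF
cat tomcat-docker/default.vcfg
```

At `vorteil run` time, the kernel is attached and a bootable disk is built from exactly this directory, which is why the project stays kernel-free and portable.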
Then we can run: vorteil run tomcat-docker. And we'll just do this. So what's happening in the back end is we're building a one-gig disk. That's the one-gig disk done, that's the machine started, and that's Tomcat running. Very simple. Yeah, it's the most boring demo in the world. Whatever you see here in the logs is what you'd see sitting on the console. Now, obviously there's a lot of stuff we've done that isn't easy to show, so I'll take you through it. Localhost — there it is. That's just mapped to my localhost, running in VirtualBox. But we've done other things too. If we look at an example — this is a MySQL package that we're using — I'll show you the .vcfg file. In this case it's a MySQL package that we converted from Docker again; you can see the entrypoint script, and that's the binary that runs, with the arguments passed — these are MySQL arguments, as an example. We build the database on the fly. But I'll take you through a couple of settings that we've added in the config file so people can use Vorteil more efficiently. We simulate users and superuser privileges. So if the application absolutely needs superuser privileges, you can say privilege equals superuser, or some other type of privilege. We can redirect standard in, standard out, standard error, everything. And there's this logfiles statement. What we do there is say: for anything that's written to /var/log/mysql* — so if MySQL writes anything to that directory — instead of actually creating the file and writing to disk, use the logging setting and stream the output somewhere. In this case, if you see the declaration type equals programs, it's going to read the log files and then send them all to whatever config you give it here. And this config is actually a Fluent Bit config.
So I don't know if you know what Fluent Bit is. Yeah. So we have Fluent Bit built into the Vorteil release. Any output that Fluent Bit supports, we support as a logging output. That means we can send all log files there, we can send kernel messages there — so it makes the machine completely stateless, basically — and we can send system information there. In this case I'll start it up, and I've got a Kibana instance running here, so we'll see the metrics coming in. You can have a look at CPU, memory and disk without actually starting up an agent. But this customer actually wants to run a Zabbix agent as well, so what we do is start MySQL and then start the Zabbix agent. This bootstrap option is a Vorteil option again: it basically allows you to modify the app at startup. You can do a find-and-replace in files; you can tell the bootstrap command to wait for a file to be created first, then do something else. It's being able to do some programmatic actions in the back end, within the config file, when the machine starts up. Then there are sysctl settings you can set, and different file systems you can change. What we've done for disk size, as an example, is very important. You can see the little plus at the front of the value in the config. If you remove the disk size, Vorteil will build a machine just big enough — plus, I think, 10%. Yeah, 10%. If you add the plus, then we build a machine big enough to house your application plus the amount of disk space you specify after the plus. Which means we try to minimize the disk usage on the machines.
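Putting those settings together, a `.vcfg` for this kind of MySQL package might look roughly like the following. The key names and values here are reconstructed from the talk and should be treated as illustrative, not as the real schema:

```toml
# Hypothetical .vcfg sketch — field names are assumptions based on the demo.
[[programs]]
  binary    = "/usr/sbin/mysqld"
  privilege = "superuser"            # simulated superuser, as described
  logfiles  = ["/var/log/mysql*"]    # stream these instead of writing to disk
  bootstrap = [                      # modify the app at startup
    "WAIT_FILE /var/run/ready",
    "FIND_AND_REPLACE /etc/my.cnf OLD NEW",
  ]

[[logging]]
  type   = "programs"                             # forward program log files...
  config = ["Name=es", "Host=elastic.example.com"] # ...via a Fluent Bit output

[vm]
  disk-size = "+256 MiB"   # app size plus 256 MiB headroom;
                           # omit the setting for app size plus ~10%
```

The point of the `+` notation is that the disk is sized relative to the application, so the built image stays as small as possible.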
But also, if you have something completely stateless like Redis — or even if you mount NFS file shares somewhere else, or have a secondary disk mounted so your data is stored separately from the virtual machine — then you only need to build the virtual machine big enough to hold the application. You don't need to store any log files, because we can obviously send log files anywhere. So we get this idea of a completely stateless application. Does that make sense? Yeah. What if somebody wants to grow the file system? Yeah — if it's bigger than the first time, you just change the configuration. And you don't even need to do that, because in AWS — actually in any cloud — you can just stop the machine and change the disk size, and Vorteil will rebuild the disk at startup with the new size. Actually, it doesn't even need to rebuild the disk — it just uses the expanded disk size. Yeah — and with EBS you don't even have to stop the machine. Well, yeah, that's true. But the initial disk: if you create your disk, upload it to Amazon as an image and start from the image — say 100 gig — on startup we check if the image is as big as the disk, and if not we just expand it before we start the apps. We'll get to that. And that's the config file. So what we can do is vorteil run again. It's now building a four-gig disk, and you'll see how quick this goes. And we'll bring up VirtualBox. That's all good — started. So it started my Zabbix agent, and it's starting my MySQL database now. It actually initializes the empty database, creates the database with the config file that I have, and then starts. So that's MySQL started. And then in Kibana you can see — well, actually, this is all stuff that we're testing at the moment — but this is what the messages look like.
So these are our system messages coming into Kibana from the machines. And then I made some pretty graphs in Kibana — it's actually pretty simple: there's CPU, memory, disk. Got it. MySQL — is one of these running? As you can see, we've been doing testing. There you go, there's one running. And just one little comment: you don't have to run a single app. If you want to run containers within that machine, you can just chuck Podman on there and go for your life. Yeah, we actually run K3s. Who built K3s again? It's so early in the morning... Rancher? Yeah, it's the Rancher guys. So we run the Kubernetes integration on our platform, and it's so easy — we spin it up really quickly, really fast. I wrote an article on it to show how we do it. You can basically have a Kubernetes platform from scratch in a couple of seconds, because you just follow these steps: we convert K3s and we run it on Vorteil. Essentially we have this running on a Mac, or we push it to VMware, to AWS, Google Cloud and those places, and people just use it. Yeah, yeah. So is the use case edge-type applications — K3s in a VM or something? Yeah, exactly. The whole reason we're doing this: the people we work with are the larger ISVs. They don't necessarily want to run Kubernetes to manage full containers, and they already have virtualization platforms built on things like KVM. So this is an extremely lightweight alternative to run a container as an isolated virtual machine. More isolation, of course — isolation is a big thing.
But also, the whole premise of this is that we integrate into Kubernetes eventually, so you can run a mix of containers and these virtual machines without losing the interoperability between the two. And you mentioned — sorry, I have a question here at this point. It's very interesting what you guys are doing. I have a question about the workflow. It's possible to get a Docker image from Docker Hub and build the VM — I understood that. Is there anything you're working on in the other direction? For example, let's say you make some changes to your VM — would it be possible to somehow save that content back into the Docker image you initially started from? The other direction. Well, you could always, in theory, if you use it as a base, start with a Dockerfile in that directory. Because in the end, the first step we're doing is just taking whatever Docker has as a file system and going through the same steps as when they start a container. You could add your Dockerfile there and start from — what's it called — the empty image, FROM scratch, I think it is. But you can still do the conversion all the time: you push to Docker and then you convert and run it. That would probably be the easier way. Although once it's converted, you can, again, add your Dockerfile to that directory and just say: add all these files. Yeah — I guess what I'm trying to get at is: how do I reuse this? Let's say you have that VM, you make a bunch of changes in that VM — how do you snapshot that? How do you save it? I guess your VM image is already there, but that VM image is now going to be your environment; you'll need to move that image everywhere you go, right? That's your source of truth at this point, right?
Yeah, that's why we had the build server originally — the build server has a repository built into it as well. So let me actually show the repositories list. There you go. There are a couple of repos that we have, like on AWS, and there's a dev repo, and you can push packages into and out of these repos. It's the same as Docker Hub, basically, but this is stuff that runs somewhere outside. You can easily just download them, unpack them and run them. Is this a repo of virtual disk images? No, it's actually not virtual disk images, it's the packages. Let me show you how this works. If I take this MySQL as an example, what I can do is: vorteil packages pack mysql. And what it'll do is actually create a Vorteil MySQL package eventually — I've lost connection. So yeah, that's the whole principle behind this. Got it, got it. Anything else? I mean, the demo is so simple and so boring, I'm sorry. But it really is powerful. Is there any constraint in terms of the images you can pull from Docker Hub? Let's say you have a privileged container image, or an image that's expected to be launched as a privileged container, and it has Docker-in-Docker, for example. Is there any constraint on which images your converter — or your compiler, whatever you call it — can source? Actually, not that we're aware of. Again, we take whatever is in that image file on Docker Hub, along with all the commands it's supposed to run, the environment variables and everything. And so far we have, I think, all the stuff in the Linux kernel enabled in our vinitd, so everything should run. You know IT — there might be something that doesn't, but so far I haven't seen any. Okay, thank you. I can show you a couple of things. This, as an example, is how we mount NFS.
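As a loose analogy for the packaging flow just shown — the real `vorteil packages pack` command and package format are not reproduced here — a package is conceptually the project files (app filesystem plus `.vcfg`) archived together, with no kernel inside, which is what keeps it portable:

```shell
# Stand-in for the packaging step: archive a project directory the way
# `vorteil packages pack` conceptually bundles app files + .vcfg (no kernel).
mkdir -p mysql-proj
printf '[[programs]]\n  binary = "/usr/sbin/mysqld"\n' > mysql-proj/default.vcfg
tar -czf mysql-package.tar.gz mysql-proj   # analogue of producing a package
tar -tzf mysql-package.tar.gz              # lists mysql-proj/default.vcfg
```

Pushing such a package to a repo and unpacking it elsewhere is then just a transfer; the disk is rebuilt from the package contents at run time on the target.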
So at startup we mount the NFS, and you can write to your heart's content on the NFS. It's here. We tried to make this as bulletproof as possible — actually, just as simple as possible to use. We've got a whole bunch of apps that we've tested and tried and used, and most of them are Docker-converted apps, just because we don't want to keep rebuilding things. If you do want to build your own app, there are things like strace built in — it's probably better if you have a look at the docs, the debugging side of it. You can run shell scripts: what we do there is include BusyBox — yes, BusyBox — so if you run it with a shell command, it'll actually execute shell commands for you. We've built in strace, so if you start the program with strace set to true, it will actually run strace on the app. So if you're missing a library or some shared object that gets called, it will highlight it to you. If you know what strace does, you'll work out how we do it. We've got something called import shared objects: if you run vorteil projects import-shared-objects on a Linux machine, it will import the shared objects from the Linux host you're running it on into the Vorteil package. And those shared objects are typically, like I said, the libraries for DNS, the dynamic linker. Yeah. And I think, as was said, when it starts up we don't have any shared objects. If you create your disk, there's nothing on the disk apart from the first partition, which has our stuff on it — you can't mount it, that's where the magic is — and then the disk you use. There's nothing on there. So if, for example, it's Go and you do a static link and you want to run this app, then only this app is on that disk. There are no shared libraries, there's no linker, there's nothing. Yeah.
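A sketch of that debugging flow — the `vorteil` flags and subcommand names here are assumptions reconstructed from the talk:

```shell
# Hypothetical debugging flow (flag/subcommand names are assumptions):
#
#   vorteil run --strace ./myapp                    # run the app under strace
#   vorteil projects import-shared-objects ./myapp  # copy host .so deps + linker in
#
# What that import step has to gather is roughly what `ldd` reports on the
# host — here using the shell itself as a stand-in application:
ldd /bin/sh 2>/dev/null || echo "statically linked (no shared objects needed)"
```

For a statically linked binary (the Go example from the talk) there is nothing to import, which is why the built disk can contain only the binary itself.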
No shell scripts, there are no users, there are no groups. Actually, you can see it in MinIO. This is MinIO that we run, and you can see there's only a MinIO binary and nothing else. Literally with MinIO, it's just a self-contained binary; there's absolutely nothing else. So it really is for us about running as lean and small as possible, and not putting the onus back on the user to try and work out what to do. There'll be outliers where, you know, you have to run strace to find some obscure library that gets called, but in most cases it's pretty straightforward. One quick question. I'm not sure if you guys covered this, but is it possible to run from a Dockerfile? Import the image not from Docker Hub, just from a recipe file. Oh, we can import from your local Docker or local containerd. So you can convert from your local service. Yeah, from your local Docker or your local containerd. I mean, I was not referring to my local image. I'm just asking whether you can convert or import the image from a Dockerfile itself, from the file, you know, from the recipe file. Oh, no, not from the recipe file. Not at the moment. Okay. There's always room for improvement. Of course, of course. Yeah. Question. So, unless anybody else has another question: with Kubernetes, do you actually have anything specific in the YAML configuration to run it, or is this just very straightforward, basically, in the way you configure it? You don't need anything. It's actually transparent to users. Yeah, there are a few things when your machines start up in Kubernetes which you don't see. For example, we wanted to support that all the virtual machines see each other's localhost.
So, we are doing some magic, of course, so that if you were in the virtual machine and hit localhost port 8080 or something, you end up on a different virtual machine. For that, there's some magic in the background, but you don't have to change your YAML file; whatever needs to be done in the Kubernetes environment we change on the fly. Got it. Got it. Does that make sense? Yeah. Yeah. That's it. I think that's just about it. You can download it and go play with it; it's all there. Yeah, another question: are you planning to maybe donate some of this to a foundation like the CNCF to get more traction, or are there no plans yet? It's open source. I didn't know that this is actually a thing. I think we are not aware of the different pathways into the CNCF; let's answer you that way. Yeah. So with that question, I would say I wouldn't even have known that this is a thing. I mean, the CNCF, you know, they host the projects, right? And there are different stages for the projects: there's a sandbox, there's incubation, there's a graduated stage. So the idea is just to have a project hosted on a neutral foundation. Of course, the open source components, right? Not anything proprietary. But the idea there is to, you know, help the projects gain more traction and more contributors, and also more users, right? Yeah. So, I just brought it up as maybe something to consider in the future, if you're interested. Yeah, well, I mean, we've already given some parts of it away as open source in any case. We're fine with that. If you can tell us who to speak to, Ricardo, we're happy to do that. Yeah, so if you look at the meeting notes, there's a sandbox application process, so you can follow the link and probably go from there. But if you have any questions, you can ask me, or anybody from the CNCF staff. Okay.
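One way the localhost sharing described above can be pictured (purely an illustrative sketch with made-up addresses and ports, not the project's actual mechanism) is a per-pod table mapping each published port to the VM that really listens on it, with loopback destinations rewritten on the fly:

```python
# Hypothetical per-pod routing table: port -> VM address inside the pod.
POD_PORT_MAP = {
    8080: "10.0.0.11",  # web VM
    3306: "10.0.0.12",  # MySQL VM
}

def rewrite_destination(dst_ip: str, dst_port: int) -> tuple[str, int]:
    """Redirect loopback traffic to the VM that owns the target port.

    Connections to 127.0.0.1 on a known port are steered to the owning
    VM; everything else passes through untouched.
    """
    if dst_ip == "127.0.0.1" and dst_port in POD_PORT_MAP:
        return POD_PORT_MAP[dst_port], dst_port
    return dst_ip, dst_port

print(rewrite_destination("127.0.0.1", 3306))  # routed to the MySQL VM
```

The point of the sketch is the user-facing contract: each VM can address its pod neighbours as `localhost:<port>` while the rewriting happens below the application, so nothing in the Kubernetes YAML has to change.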
Yeah, I've got to step out. Interesting stuff, and bye bye. All right, cheers. All right. See you. Bye. Thanks. Okay, yeah, we'll take it away as a note, and what we'll do is, we're going to go back to bed now, sleep for another three hours, and then we'll send you an email and ask you how we get into that part of the program. Yeah, yeah. I mean, it's something to consider, but I think it's good stuff. I mean, it's useful. I think people want to streamline how they run some of these isolated VMs with Kubernetes. And then, I mean, I've been working with the Kata Containers project, but I see some of the differences here, where it's all packaged up, and it may be, you know, more of a use case for people who want that faster experience when they have it packaged. Yeah, for us, the turn-off with Kata Containers was the fact that they use agents, because we said from the start that we never want to use agents in any of our solutions. Yeah, so it's a different approach. Yeah, yeah. And an agent is, I think, easier, because at the end of the day you just forward the request within the pod to your one VM, and then the rest is just what it was before, pretty much, right? Whereas the networking setup was a little bit more difficult if you run multiple VMs within that pod. Yeah, yeah. Cool. All right. Thanks a lot. Thank you for joining. Yeah. Keep in touch. Yeah, we'll send a follow-up email. Yeah, we will. You too. All right. Cheers. Have a good day. Bye.