Hi. Hey, Ricardo. Hello. Very good to meet you. Hi guys, Rodney here. Hey, Rodney. There you are. How are you, Ricardo? Thank you so much for having us, for the invitation; we really appreciate the opportunity. Yeah, thank you for joining and thank you for presenting. We met briefly, I think; I may have attended one of the sessions that you ran. Oh, really? Okay. Yeah, I probably just asked a question, but it was very brief. The virtual one, the last one. Great. Let's give it maybe a couple of minutes so other people can join. Good morning. Good morning. Where are you located? We're both here in San Jose, California, Silicon Valley. How about yourself? I'm in San Mateo. Oh, okay. We have a nice day outside today, finally. Yeah, it's been raining. Very cool. I'm just curious, how did you come across us? One of the challenges that we're having is people knowing about us, so I was happy to see your message. Yeah, I don't remember exactly; I've been reaching out to different projects in the space of runtimes, in machine learning, and also some projects in the IoT or edge space. Generally I look at new projects that are out there and go about reaching out on GitHub or sending out emails. The idea with the SIG is to get more participation from community projects, and for the projects to get exposure so they mature more, get more users, and grow their communities. And users can also learn how to use the project, and at the same time they can grow the project by contributing to it or by helping find more contributors. Sure. Well, thank you again for the invitation.
One of the things that we're definitely looking for is a little bit more exposure. We don't necessarily want to get overexposed yet, because we're a very small team, but some exposure will definitely help us, you know. Yeah, and I think it helps overall, like companies supporting open source projects: they can start using it, and then it also could become more of a business model for the company too. That's right. If the company can do well, then the open source can do well. That's right. Cool, so let's get started. All right. Well, hi everyone. My name is Cesar and I'm the founder and CEO of a company called Nestybox. My co-founder Rodny Molina is joining us right here. We are a very young company; we started about a year and a half ago and just went through the Y Combinator accelerator in the summer. Rodny and I are both software engineers working on low-level systems. Rodny was at LinkedIn for many years; I was at VMware on their ESX hypervisor team. We left our jobs to found Nestybox with the idea of enhancing containers, particularly Docker containers and maybe, you know, pods, such that they can run not just microservices but any workload that would normally run, let's say, on a virtual machine, and that they can do that securely, without the need for privileged containers, right? We saw there were use cases where people wanted to run things in containers that normally run in VMs, but they were always resorting to privileged containers, plus these complex configurations, complex docker run commands, complex entry points. And we're like, why can't the container itself just create an abstraction such that the processes running inside of it can seamlessly run all that type of software, right?
That was the gist of the idea behind Nestybox. What we've developed is a low-level container runtime, a fork of the OCI runC, that takes the first steps in that direction, right? I actually prepared a presentation, if I can share, where I go through this runtime: what makes it unique, what are some of the benefits, how it works a little bit under the covers. I think that will help guide the discussion. Would that be all right, if I present that? That would be great. Okay, let me do it. Are you guys seeing that right there? Yeah. All right, perfect. So the runtime that we developed is called Sysbox. Everything is "box" with us, you know: Nestybox, now Sysbox. It's a very young runtime, but it's starting to get some adoption, so we're very happy about that. I'm going to briefly talk about what Sysbox is, some of the features and benefits, and use cases, and I'll give a little demo. I'll talk a little bit about the different flavors that we have: we have an open source version and we have our commercial version. Then a little bit about the high-level design, some limitations, the roadmap, and also some related tech. We're happy to take questions in between as we go through the presentation. So as I mentioned, Sysbox is this low-level container runtime that we've developed. We forked it in 2019 from the OCI runC, which is the standard runtime used by Docker and Kubernetes, and it tracks it very closely; as changes come into the OCI runC, we cherry-pick those into Sysbox itself. It is an open source runtime, although we have a commercial version that complements it with proprietary closed-source features, so we're using an open-core model for the commercial version. As I mentioned, the goal is to enhance containers to enable them to run the same workloads that would run on a virtual machine or even on a physical host, seamlessly and securely.
Seamlessly means without any complex configuration required: you just deploy the container and you can use it almost as you would a VM, right? And securely means with strong isolation for the container; no more privileged containers in order to do that. Here's a diagram that explains a little bit the way Sysbox works, and I'll go into more details later. Basically you have a host machine, which has to be a Linux machine right now, and on it you have Docker, Kubernetes, or even Podman nowadays, and Sysbox sort of sits below them; it's a low-level runtime like runC. So you use it with Docker: you do docker run and you pass it a flag saying use Sysbox instead of the regular OCI runC. And you can pass it any image that you want; there's no requirement whatsoever that Sysbox places on the image. Any container image will work. What Sysbox creates is a container that is capable of not only running microservices, like all regular containers, but also what we call system-level software: things such as systemd, Docker itself, even Kubernetes. All this software that traditionally either runs on VMs or, if you want to run it in containers, must run in privileged containers. All of that software Sysbox is now capable of running with strong isolation, using a feature of the Linux kernel called the user namespace. Sysbox always uses the user namespace for all its containers, and that gives you a full root inside of the container that has zero privileges outside of it. That, coupled with a bunch of other techniques that Sysbox uses as far as OS virtualization, enables this type of software to start running inside of the container seamlessly. No change to that software is required, right?
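The usage just described is a one-flag change to a normal docker run; a minimal sketch, assuming Sysbox is installed and registered with Docker (the image name is just an example):

```shell
# Launch a "system container" with Sysbox instead of the default runC.
# The only difference from a regular docker run is the --runtime flag;
# any image works, ubuntu:20.04 here is just an example.
docker run --runtime=sysbox-runc -it --rm ubuntu:20.04 bash

# Inside, root has full capabilities, but it maps to an unprivileged
# user ID on the host via the Linux user namespace.
```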
So you should be able, for example, to deploy one of these containers and install the software if you wanted, just like you would on a VM, and it should install and run perfectly fine, right? As I mentioned, no complex setup, meaning no custom entry points into the container, no complex docker run commands. The runtime is taking care of setting up the abstraction of the container such that it really starts resembling that of a VM in many ways, and that's what enables that type of software to run. Any questions at this point, or is it pretty clear? Yeah, so I'm just trying to understand how different this is compared to some of the other runtimes, like Kata Containers or Firecracker, right? Yeah, sure, let me answer that. I do have a slide at the end where I list a lot of other alternatives, but let me answer that. Sysbox is pure OS virtualization. There are no VMs involved; there is no hardware virtualization. So that immediately differentiates it from Kata, from Firecracker. There is no hypervisor required here. Okay. Pure OS virtualization, right? Now, the Linux kernel already supports OS virtualization through namespaces and cgroups, but it's not enough; what the kernel has is not enough to enable containers to run this type of software. Sysbox is filling the holes that are in the Linux kernel, such that it creates an OS-virtualized container that is able to run this type of software. In other words, it makes it look like the processes running inside of the container have their own kernel, when in reality they're sharing the host's. Okay, so it's like a user-mode kernel or something? Well, we don't have our own kernel; it's the one Linux kernel there, right? It's just that Sysbox understands where the holes are in the Linux kernel, and it's trying to cover those as far as namespacing is concerned, right?
But still one kernel; there's no user-mode kernel. There's just the runtime setting up the container in a more coherent way. That's what's happening. Gotcha. Yeah, so it is very different from Kata or those other runtimes that rely on VMs. This one does not rely on VMs, and that gives it a key property: it's not dependent on a particular hypervisor, which gives it extra portability, because it will run wherever Linux is running, right? Some of the key features that it has at a high level: it has what we call fake root, meaning the Linux user namespace. Root in the container has full capabilities, but only inside the container, not outside on the host. Not to be confused with rootless containers, right? Because there's something called rootless Docker, which is Docker itself running without privileges at the host level. This is not that; we're not going there. Sysbox requires root privileges at the host level, but the containers that it generates have a fake root inside and are capable of running software like Docker itself. So it gives you a fake root in all the containers. As I mentioned, there are no special container images required, and no complex Docker commands or entry points required for the container. It assigns exclusive user namespace ID mappings to each container for extra isolation. I'm not sure how familiar you guys are with the Linux user namespace: there's an ID mapping, meaning root in the container maps to some ID on the host that is fully unprivileged, a plain user ID on the host. What Sysbox does is give each container an exclusive, non-overlapping range of host user IDs, and that enhances cross-container isolation, right?
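The ID mapping being described can be seen from inside a container by reading /proc/self/uid_map; a sketch, assuming a working Sysbox install (the actual host-side ID range you see will depend on the host's configuration):

```shell
# Start a Sysbox container and print its user-namespace ID mapping.
# Format of /proc/self/uid_map: <id-in-container> <id-on-host> <range>
docker run --runtime=sysbox-runc --rm alpine cat /proc/self/uid_map

# For comparison, a regular runC container (no user namespace by
# default) shows the identity mapping "0 0 4294967295" instead.
docker run --rm alpine cat /proc/self/uid_map
```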
So it's capable of doing that, but we only do that in the Sysbox Enterprise Edition at this point, right? We're keeping some of the features that we feel are more enterprise-level in the Enterprise version, so that we have something, you know, to keep the lights on. It also supports what we call preloading of inner container images into the outer container. Because what a lot of people are running inside of the Sysbox containers is Docker itself, right? They love to run Docker in sort of an isolated environment without having to resort to VMs; they're using this outer container and they're running Docker inside it. And then the next question is, hey, does that inner Docker always have to pull images from the network, or when I launch that container, can I preload it with all of these inner Docker images? And the answer is either: you can have the inner Docker pull them from the network, or you can easily preload them into the outer container using a Dockerfile or a docker commit, and I'll show you that in the demo. There's also a problem that arises when you have multiple of these containers, all running Docker inside, for example, and all pulling their own images: those images can be pretty heavy, and they can very quickly start consuming a lot of space on the host. Is there a way to share the inner container image layers? For example, you have two of these Sysbox containers, each with an instance of Docker, and they're all using the same images: is there a way to share those image layers between them? The Enterprise version has a technique that it uses to share those layers, and that reduces storage significantly on the host, right?
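The Dockerfile route for preloading inner images might look roughly like this; this is a sketch, assuming an outer base image that already has Docker in it (the Nestybox image name is one of their published reference images, used here as an example) and that sysbox-runc is configured as Docker's default runtime so the inner daemon can run during the build:

```shell
# Build an outer image that ships with inner Docker images preloaded.
cat > Dockerfile <<'EOF'
FROM nestybox/ubuntu-focal-systemd-docker
# Start the inner Docker daemon just long enough to pull the images
# we want baked into the outer image's /var/lib/docker.
RUN dockerd > /var/log/dockerd.log 2>&1 & \
    sleep 3 && docker pull alpine:latest && docker pull busybox:latest
EOF

docker build -t my-preloaded-outer .
```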
Some of the benefits that we see as a result of Sysbox: number one, it hardens container isolation, because it always uses the Linux user namespace. Even if you don't want to run Docker or Kubernetes or systemd inside of a container, even if you just want to run microservices, Sysbox gives you a hardened container already, right? A fake-root container, with exclusive user ID mappings in the case of the Enterprise Edition. Now, because it's able to run things like systemd, Docker, and Kubernetes securely inside of the container, that opens up a bunch of new use cases for containers. It sort of bridges the gap between a container and a VM, and it gives users a container-based alternative to a VM: something that is more efficient, faster to deploy, and, like I said, more portable, because it's not tied to a particular hypervisor. You know, when I deploy a VM on AWS, that's it; that VM is not moving from AWS ever, right? It's an AWS VM. These things are bridging the gap between containers and VMs; in a way, they're enabling containers as infrastructure as code. It also avoids the need for privileged containers in many, many scenarios. If you're using a privileged container right now, let's say in your CI or some other part of your infra, you are putting your infra at risk. This technology avoids the need for that, because it gives you a container that is strongly secured and yet is capable of running most of the same workloads that would run in privileged containers. As I mentioned, it gives users a fully capable root inside of the container. And that is very helpful, because a lot of people also have headaches with, hey, for security purposes I don't want to be root inside of the container; but then, if I'm not root inside of the container, there are many things I cannot do inside of it, right?
This says, hey, just be root inside of the container if you want to, and the container will still be well secured because of the fake-root thing. And it reduces infrastructure costs, by virtue of replacing VMs in many scenarios, not in all scenarios, right? We're not saying that VMs are going to go away, but it does give you a very compelling alternative to a VM, and by virtue of replacing VMs you are going to save infrastructure costs because of the superior efficiency. For example, Rodny and I did an analysis on this: we have a big test server with a lot of cores and a lot of memory, and we found that we can put twice as many of these system containers on it as we can VMs running the same workloads at the same performance. So it roughly doubled the capacity of that machine, right? Here are some of the use cases that we're seeing from our adopters. The most common one right now is CI/CD. Why? Because the Docker-in-Docker and Kubernetes-in-Docker patterns show up a lot in the CI/CD use case, right? The jobs are containers, but then the jobs need to run Docker itself, or in some cases even deploy Kubernetes clusters, so it's very convenient to do that. Prior to Sysbox, people were using privileged containers, and Sysbox gives them a way of saying, hey, now I can do this a little bit more securely in my CI infra. We're also seeing adoption for dev environments: people are deploying these containers and using them as dev environments. One of our early adopters is a company called Coder that does remote developer environments. They create containers behind the scenes to provision those environments, but those containers were fairly limited in what they could run. With Sysbox, it opens up a lot more workloads that they can run inside of those environments, so they're using that already.
And as I mentioned, it is container-based infrastructure as code, right? We are also seeing people that say, hey, instead of having to deploy a VM, can I use one of these containers? It's a lot more natural to a lot of people that are already in cloud native to use a container than to use a VM, right? It's a Dockerfile, it's a simple docker run command, it runs on the cloud, it runs on the machine, and it doesn't require nested virtualization on the cloud; it avoids the need for nested virtualization on the cloud. And it's very efficient, right? So we are seeing that also. Yeah, one thing I notice about this is, this is really interesting, because projects like Kata Containers or Firecracker require nested virtualization for this. Correct. If you want to run them on cloud VMs, right? Exactly, you're required to. And when you're doing nested virtualization, not only is it a little bit painful to set up on the cloud itself, you're wasting your cloud computing cycles emulating hardware in software, and you're paying for those cycles, you know? This is a much more efficient way of doing it. Hey, we're not saying that this is equivalent to a VM; in some cases you do need a VM, right? If you want to have a special kernel, or you want to run different OSes, you do need a VM. But this gives you an alternative that in many scenarios can replace a VM with a much more efficient one, right? Let me tell you a little bit about what it does under the covers, and then I'll show you a demo of how it's used, which also shows a little bit of what it does under the covers. Some of the key things at a high level: it uses a kernel module called shiftfs, which right now is only present in the Ubuntu kernel, okay?
That allows us to give the fake root inside of the container access to the container's file system. What's happening there is that Docker normally sets up the container file system under the covers, under /var/lib/docker, and everything in those directories is owned by root. But when we create the container, the container processes are not root; they're root inside the container, but they're not root on the host. So how do we make those non-root processes at the host level access that root-owned file system? With shiftfs; that's the glue between them, right? We mount it inside of the container, and whenever the container processes access their file system, their chroot jail, shiftfs enables them to access those root-owned files that are set up for them. That doesn't mean that Sysbox only works on Ubuntu, right? It means that on Ubuntu you get this: it works without any changes to Docker. Without Ubuntu, or more accurately without this module, you do need to put Docker in what is called userns-remap mode, which enables the user namespace. But on Ubuntu you don't even need to do that; you can just use Docker as it comes. And that's desirable because that way you can run Sysbox side by side with your OCI runC and not worry about having to make any changes to the Docker config. It also does what we call partial emulation of the procfs and sysfs file systems; those are /proc and /sys, right? By that we mean that those file systems have many resources through which applications communicate with the kernel. In particular, system-level apps like Docker, Kubernetes, and systemd touch a lot of those files under proc, right? And many of those files are already namespaced by the Linux kernel, which is great. But many are not.
And Sysbox, for the ones that are not namespaced, does a sort of interception of the access and does the namespacing on behalf of the kernel, if that makes sense, right? That is a key component of enabling all of these other apps to run inside of the container, okay? It also does selective syscall trapping. In general, we don't want to trap syscalls, because that immediately affects performance. But certain syscalls, in particular control-level operations that seldom occur, that are more control-path operations rather than data-path operations, we have to trap, right? For example, the mount system call; that's a very important system call for us to trap. Why? Because, as you will see, inside of the container we are already emulating /proc and /sys. But then inside of the container someone may say, mount proc somewhere else. And the proc that has to show up is our emulated proc, not just the proc from the kernel; otherwise the emulation would break down. So that's why the mount syscall gets trapped and we mount ours, right? And that happens inside of the container, or in any other inner containers that may be inside of the Sysbox container; we're always trapping. And that emulation is not easy, because we have to figure out, okay, that process, what namespace is it in? How are we emulating it, right? That's where a lot of the meat of Sysbox is, in this area, and Rodny is the main person responsible for all that stuff. Okay. Sysbox also sets up some implicit mounts in the container. As soon as you deploy a container, it is already setting up a bunch of other mounts that the user may not have asked for, but that it knows processes like Docker, systemd, and Kubernetes rely on in order to work properly. These are things that you would normally find on a regular VM, right?
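On non-Ubuntu hosts without shiftfs, the userns-remap setup mentioned earlier is standard Docker configuration; a sketch, where the remap user name and the sysbox-runc binary path are assumptions about a typical install rather than fixed requirements:

```shell
# On hosts without the shiftfs module, put Docker in userns-remap mode
# so container filesystems are owned by remapped, unprivileged IDs.
# The same file registers sysbox-runc as an additional runtime.
cat > /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "sysbox",
  "runtimes": {
    "sysbox-runc": {
      "path": "/usr/bin/sysbox-runc"
    }
  }
}
EOF

# Restart Docker so the new daemon configuration takes effect.
systemctl restart docker
```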
And so it's setting those things up for you inside of the container. So all of this really amounts, again, to creating the abstraction of the container in such a way that it more closely resembles that of a virtual machine; that's what's happening here. Let me, if I may, show you a demo, okay? Let me stop sharing the screen and share my terminal. Any questions before I go on to the demo? Nope. Anybody else? Okay, can you guys see my terminal right there? Yes. Is the font too small or too big? You can make it a little bit bigger. Okay, good. So, you know, you go to the GitHub site and you can download it; it's free, out there on the internet, right? Once you download it, for example here I have a few versions, you simply install it like that. In this case I'm installing the Enterprise version, because it's a little bit faster as well. It installs very quickly, and the installer already sets up Docker in such a way that Docker knows about it, right? Once it's installed, the only thing you need to do is use Docker as always, except that you need to pass that flag. That's the only thing you need to do, but once you do that, you're going to get that enhanced container. For example, now you can pass any container image you want. We have many reference example container images on our Docker Hub site. One of them, for example, this one, has the Ubuntu Focal distro in it, comes with systemd in it, and comes with Docker in it. So it's like a mini VM, right? A mini VM with Docker. So you launch it, and there you see systemd starting to initialize, and now you are inside of the container, and it starts resembling a host, right?
You see systemd has already started a bunch of services, including Docker itself; Docker is alive right there. This is showing you that this is a fake-root container, right? Root in the container maps to some user that Sysbox has chosen at the host level, and that already gives you the strong isolation. You can create inner containers inside of it, for example with Docker, and it's going to run at full speed, because there's pretty much nothing we're doing there to slow Docker down, right? So what does it look like from the host? Yeah, so I split the screen; at the bottom I am at the host level. You only see the outer container launched by Sysbox; you don't see this Alpine container that I'm inside of, right? It's totally encapsulated inside of that outer container. Do you see the processes, like if you do a ps? Yeah, this may require me to bring down the font, okay? So let me show you what's happening. There you are. You can see this is the outer container right here, running systemd; this is the one that was deployed by Sysbox. There's the containerd-shim. Look at the systemd: this is everything that's running inside of that container, all of the systemd daemons. And here's the inner container; you can see it right there, the Alpine container. So you actually see all the processes? Yeah, and that's a nice property, actually, we think, because it allows a system administrator to have a full view of what's happening in all of the containers, no matter how many levels of nesting there are. You have a full view from the host, and each level has a full view of everything inside it. But from the inside, of course, you cannot see anything, right? So from here you can see, and from inside you can only see the container itself, right? That's the container inside the container. That's right.
That's right: from the inside you cannot see the host. And that is also nice, I think, from a security perspective; this may be something important, because with VMs you're sort of opaque, right? If you are the hypervisor, you cannot really look at what processes are running inside of that VM; it's an opaque abstraction. Whereas with containers, at least from the host, you can see what's happening inside of the container, so that may help with monitoring. Does that answer your question? Also, here, I'll show you: you can see that the user namespace is right there on the container that Sysbox creates. In the inner container there's no user namespace; that's a regular container that Docker is creating with the OCI runC. But this user namespace ensures that everything running here is isolated from the host. Does that make sense, right? Yeah. Let me also show you, for example, one of the things that I had... Hey, Cesar, can you increase the font? Yeah, I'll do that. So again, back here, I'm inside of the container. I'm going to decrease it just a little, because I want to show one thing inside of the container: you can see that under proc, inside of the Sysbox container, Sysbox is right there, emulating certain things inside, in particular the /proc/sys hierarchy. There are a lot of files under there, and many of those are being emulated by Sysbox itself, right? So the /proc mount shows that. And then, as I mentioned earlier: if I become root inside of the container, and I mount proc onto this new directory, and I look again... you should see Sysbox right there. It's a sysbox-fs mount. That shows you that the syscall was intercepted, basically, from the container.
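The check being demoed can be reproduced roughly like this; a sketch to run inside a Sysbox container (the mount point name is arbitrary, and the exact filesystem name shown by findmnt may vary by Sysbox version):

```shell
# Inside a Sysbox container, mount procfs at a second location and
# confirm that Sysbox's emulation shows up there too, i.e. the
# mount(2) syscall was trapped and handled by the runtime.
mkdir -p /root/proc2
mount -t proc proc /root/proc2

# The emulated entries appear as sysbox-fs mounts under both /proc
# and the new mount point.
findmnt | grep -i sysbox
```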
And so that's how we were able to do what we needed to do, right? Now, I showed you how I used Docker to deploy an inner container using Alpine. So now, inside of the Sysbox container, there's an Alpine image. One of the things that you can do with Sysbox is go to the host level, where you don't see the Alpine image, you only see the outer container, and do a commit of the outer container. What's going to happen is that the commit is going to capture that inner image, okay? It's like a snapshot: boom, and it captures that inner image. So now, if I launch from the committed image... I'm going to launch, at the bottom, a new container using Sysbox and the committed image. It should start systemd again, because it's the same image, right? Same thing. But now, inside of it, the inner Docker should no longer be empty; there's the Alpine image, right? So it captures it. It's able to capture it with a commit, or you can do the same trick with a Dockerfile, right? So when you build your outer image, you can start inserting inner containers in it very quickly. Very useful for many CI/CD scenarios. So now let me exit and stop both of these containers; both are stopping right there. Is this good enough? Would you guys like to also see a Kubernetes example running inside of the container? Sure, sure. For Kubernetes it's the same exact thing. We have a reference image that comes preloaded with all of the Kubernetes components. I'll show you. This one takes a little bit longer, because there's a little bit of setup happening underneath, but there it is, running already. By the way, this is slow because it's running on a VM inside of my laptop, so on a faster machine it's a little faster. Now, I launched the image that has Kubernetes; as I mentioned, it comes preloaded with all of the Kubernetes components. Look at this.
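The commit flow just shown can be sketched as follows; container and tag names are made up for illustration, and the outer base is one of Nestybox's reference images with systemd and Docker inside:

```shell
# 1. Run an outer (system) container and create an inner image in it.
docker run --runtime=sysbox-runc -d --name outer \
    nestybox/ubuntu-focal-systemd-docker
docker exec outer docker pull alpine:latest   # inner Docker pulls an image

# 2. From the host, snapshot the outer container; the inner Docker's
#    image cache under /var/lib/docker is captured along with it.
docker commit outer my-outer-with-alpine

# 3. New containers from the committed image start with alpine preloaded.
docker run --runtime=sysbox-runc --rm my-outer-with-alpine docker image ls
```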
If I do docker image list, look at all the stuff that already came inside: kube-proxy, the controller manager... Kubernetes runs all of these things in pods, and it's able to run all of them in pods here too; these are all the images that are going to create those pods. And so now, in order to start Kubernetes, you would do sudo and then the exact same command that you would use on a VM to set up Kubernetes. That command I have here, one second... there it is, the kubeadm one. If we're doing things right, there's nothing special that you need to do in the container; the exact same sequence that works on a physical machine or a VM has to work inside of the container to install Kubernetes. And there Kubernetes starts to boot up. It's going to tell you, hey, I'm going to wait two to four minutes, but it's actually going to go way faster, probably 30 seconds, because it is going to find that it already has all of the components and doesn't need to download them; they're already there, so it doesn't need to pull all that stuff. So you give it a few seconds. The way you would then create a Kubernetes cluster is you would launch multiple of these Sysbox containers, connect them through a Docker network or an overlay network, and each of those containers represents a Kubernetes node. That's what's happening; in this case, this one is becoming the master node, right? You see, it's done already. Now, normally you would have this scripted, but I'm showing you the manual way to do it; normally all of this would be scripted so that it just comes up right away. So that's the node. It's not yet ready, it's about to get ready; many of the pods are running, and it's waiting for the pod network to get initialized. Let me do that, by the way; you can just apply the pod network now. Again, the exact same procedure you would follow elsewhere. It's starting to do its thing, right?
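Scripted, the node-per-container approach described above might look like this; a sketch where the image name, network name, and token handling are all illustrative placeholders, not exact Nestybox artifacts:

```shell
# Create a Docker network so the "nodes" (Sysbox containers) can talk.
docker network create k8s-net

# Master node: a Sysbox container preloaded with the K8s components
# (the image name here is an example of a kubeadm-ready image).
docker run --runtime=sysbox-runc -d --name k8s-master --net k8s-net \
    --hostname k8s-master nestybox/k8s-node
docker exec k8s-master kubeadm init --pod-network-cidr=10.244.0.0/16

# Worker node: another Sysbox container, joined with the token that
# kubeadm init printed (shown here as placeholders).
docker run --runtime=sysbox-runc -d --name k8s-worker --net k8s-net \
    --hostname k8s-worker nestybox/k8s-node
docker exec k8s-worker kubeadm join k8s-master:6443 \
    --token <token> --discovery-token-ca-cert-hash <hash>
```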
And so what's happening is Kubernetes is doing its inner workings, you know, setting up its pods and things like that, and soon you're going to start seeing everything come to the ready state. Now, from the host itself, again, you only see the outer container. At the host level it's pretty clean, right? Everything got encapsulated inside of that container. And I can launch another one of those and join it into the cluster, right? I can launch as many as I want and join them into the cluster with a simple command. But you can see Kubernetes is already running and it's looking good inside of that container, right? Any questions here? I should probably stop here, as I'm running a little short on time. Any questions? Yeah, I think this is great. I mean, you're actually getting the experience of a host-level machine. Yes. I mean, there are other alternatives, like minikube or kind, but they don't actually run like a machine. They're just kind of like... Yeah. And kind uses privileged containers and very complex image entry points, right? That they've set up, right? So because of the use of privileged containers, you immediately put your host at risk, right? And then minikube, well, it has both a VM mode and a mode without VMs, right? It goes either way. But in either case, it's using privileged containers or using VMs. To our knowledge, this is the only... Does this run on the Mac too? I mean, like Docker for Mac or something? No, we haven't tried it. We do need a pretty recent Linux kernel, 5.5 or above, in order to do some of the tricks that it's doing. Do you know, Rodney, if it would run on a Mac? Well, we haven't really tested the Mac, you know, but obviously as long as you have a Linux VM, you can do anything that you do in Linux. Now, when we mention 5.5, that is excluding Ubuntu. For Ubuntu, we are fully compliant with what they have since 5.2. So, yeah, it's for non-Ubuntu kernels that it's 5.5.
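Joining additional nodes, as described above, would look something like this; the token, hash, and address are the placeholders that `kubeadm init` prints, and the image name is hypothetical:

```shell
# Each additional Sysbox container becomes another cluster node
docker run --runtime=sysbox-runc -d --name k8s-worker1 my-k8s-node-image

# Join it using the credentials emitted by "kubeadm init" on the master
docker exec k8s-worker1 kubeadm join <master-ip>:6443 \
    --token <token> \
    --discovery-token-ca-cert-hash <hash>
```

Connecting the containers over a Docker bridge or overlay network is what lets the nodes reach each other on the API server port.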
And when it comes down to the Mac, as I said, if you have a Linux VM, it should run there too, yeah, of course. Yeah, yeah. But we'd have to try it, yeah. Similarly, you don't have a dependency on the underlying hardware, so it shouldn't matter whether it's an x86 or an ARM platform... It should not matter. There's no dependency on the underlying hardware. Now, some people have asked for ARM. We haven't had a chance to try it yet, but that's exactly the answer we gave them. Yeah, and actually, there are a couple of engineers testing Sysbox on ARM systems right now. We haven't heard back from them — it just happened last week — but they're interested in trying ARM. I mean, I think they made it work, but we have to, you know, keep track of what happened. Okay, interesting. Let me go back to the presentation. Here I talk about the two flavors of Sysbox that we have, right? We have what is called the Community Edition, which you can find on GitHub; it's free, open source under Apache 2.0. And it's really meant for individual developers to play around with, get a feel for the experience, and even set up initial CI infrastructure, you know? And then we have the Enterprise version, which is paid — and I didn't put a price because it's something we're still working out ourselves before we publish prices. It's meant for production, right? It has the security features, like exclusive UID mappings per container. It has sharing of inner container layers for higher efficiency, right? We test it at higher scalability, and it comes with our support and feature prioritization, right? So this is what we're hoping will keep the lights on for a while — and more than that, right? It will allow us to grow. But really, where the adoption is happening right now is mainly here, on the community side. And we're very happy about that.
We're seeing some adoption here, and we're sort of trying to juggle development between both of these things, you know? At a high level, the design looks a little bit like this. As I mentioned, Sysbox is a low-level container runtime. It works below Docker and Kubernetes — in fact, it works below containerd and things like CRI-O, right? It is sort of the lowest layer, the one that generates the container itself. It takes an OCI spec for the container, right? So it is also OCI-spec-based. And it's composed of three components: sysbox-runc, sysbox-fs, and sysbox-mgr. sysbox-runc, right here, is really the first entry point. It takes the container spec from the higher layers and actually sets up the namespaces, the cgroups, the chroot jail for the container, right? It really creates the container. It runs ephemerally: in other words, it sets up the container and then that process dies — the sysbox-runc process dies. It's very similar to the OCI runc, but with some modifications. sysbox-fs and sysbox-mgr are different: they're daemons, always running. sysbox-fs is the component that actually does the procfs and sysfs virtualization, right? So when we create a container with sysbox-runc, as I mentioned, we set up those mounts so the accesses come here. It uses a FUSE-based file system in user space, so the accesses come here, and it understands the namespaces and what it needs to do. It talks to the kernel as needed in order to set up the machine the right way. It also does the syscall trapping. Again, when sysbox-runc sets up the container, it tells the kernel: hey, for any processes inside of the container that hit these syscalls, trap them and send them to sysbox-fs, right? And so when those processes go and do a mount, that access comes to sysbox-fs.
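For context on how these components plug in below Docker: registering Sysbox as a Docker runtime is typically a matter of adding an entry to `/etc/docker/daemon.json` and restarting the daemon. This is a sketch; the binary path assumes a standard install location:

```json
{
  "runtimes": {
    "sysbox-runc": {
      "path": "/usr/bin/sysbox-runc"
    }
  }
}
```

After that, `docker run --runtime=sysbox-runc ...` routes container creation through sysbox-runc, while sysbox-fs and sysbox-mgr run as host daemons serving all Sysbox containers.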
sysbox-fs figures out which container it's coming from, what it needs to do, where it needs to mount, and it does the mount. It's important to highlight that the FUSE-based access is only for procfs and sysfs — the user's data is not impacted in any way. So think about how this will scale. That's right. So we try to stay as far away as possible from the data path, right? So that we're not affecting the performance of the container, because that would immediately kill it — if the data path had to go to the kernel, come back to sysbox-fs, and then go back to the kernel again, right? And then we have sysbox-mgr; it's also a daemon. And it does things like allocating exclusive UID mappings per container, right? It figures out if there's a possibility to share inner Docker layers among containers and sets things up accordingly. So it's sort of providing services along those lines, right? Any questions at this level? So you can see that this is a more involved runtime than, say, the OCI runc, right? The OCI runc is really just this component — and that's the most involved component, by the way — but these two are the heart of Sysbox, particularly this one, right? It is the heart of the thing. So one question about those components: those are running as processes on the host machine, right? Yes. Yeah. Everything is in user space; they're running as processes on the host machine. They do require root access — all three require root-level access on the host machine. They're written in Go, all three of them. Are there any mechanisms for hardening those components? That's something that we're working on, right? Hardening these components themselves is an area we have to work on — for example, maybe setting up security profiles for them, making sure that they're only doing the things that they need to do.
That is still something that we're just starting on right now. We're focusing more on hardening the container itself, right? Whatever is running inside of the container — the container is the trust boundary. That's sort of the way we're doing it. But we do need to work on those two, because people are going to ask about that, too. Thank you. Some of the limitations, right? Because nothing is perfect. One is that it requires Linux 5.5 or above. Integration with Kubernetes is still a work in progress: in other words, we are not yet able to have Kubernetes deploy pods with Sysbox, right? And that's because in Kubernetes itself, the topic of using the Linux user namespace in pods is still in the works, right? And we always use the Linux user namespace. Having said that, we already have a working version with CRI-O — we haven't released it yet, but internally we have a working version with CRI-O already. So we're getting there. It is not 100% OCI compatible. What's happening there is with the OCI spec that containerd or CRI-O would pass to Sysbox to create the container. The reason it's not 100% compatible is that Sysbox always uses the Linux user namespace, right? It always will. In other words, even if Docker says "just create me a regular container, without the user namespace," we set it up with the user namespace, right? Because that's the heart of the thing, right? So those little things create that incompatibility right there. Other than that, everything is pretty much compatible. This hasn't been a problem — people don't even notice it, right? They're just doing docker run and things are working for them. But it's certainly something where we would love to work with the OCI to see if we can adjust the spec to accommodate some of the things that we're doing; we haven't yet had a chance to do that.
Some low-level functionality does not yet work inside of the container, right? For example, IPVS still doesn't work inside of the container: ipvsadm finds itself without permissions, you know? Sysbox needs to do some more trickery in order to get it to run. And the procfs and sysfs emulation still has plenty more to do. For example, we want to emulate things like /proc/cpuinfo and /proc/meminfo: they should reflect not the resources of the host, but the resources that were given to the container via cgroups, right? So if the host has, say, 32 CPUs, but the container was only given four CPUs, /proc/cpuinfo should only show you the four CPUs, right? And people are asking for that, because once you do that, things like Kubernetes run even better inside of the container, because they can start allocating resources the right way. So plenty of room to grow in that space. As far as the roadmap, our number one item right now is integration with Kubernetes. Also: enabling more functionality to run inside of the container — more and more system-level workloads — improving the procfs and sysfs emulation, improving the security hardening, both of Sysbox itself as well as of the container, even exposing devices into the container with their proper permissions, right? And I think this list falls short; there are a lot of other things we could be doing. There's a lot of potential, we think, for this technology. It's really nascent right now, and I think there's a lot of potential for it. And finally, here we have a comparison with related tech. I won't go into detail, but suffice it to say that LXD, Canonical's container engine, is probably the closest thing to Sysbox in spirit, right?
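The /proc/cpuinfo point boils down to simple arithmetic: the effective CPU count a container was given is its cgroup CPU quota divided by the period. A minimal sketch with hypothetical values — on a real cgroup-v1 host these would be read from files like `/sys/fs/cgroup/cpu/cpu.cfs_quota_us` and `cpu.cfs_period_us`:

```shell
# Hypothetical cgroup values: a quota of 400000us per 100000us period
quota_us=400000
period_us=100000

# Effective CPUs the container was actually granted
effective_cpus=$(( quota_us / period_us ))
echo "effective CPUs: $effective_cpus"   # → effective CPUs: 4
```

An emulated /proc/cpuinfo would then present only that many processor entries, so tools like the Kubernetes scheduler size themselves to the container's allotment rather than the host's.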
It is able to generate these VM-like containers, but with the big, big difference that LXD is not compatible with the Docker and Kubernetes ecosystem, right? It is its own thing, whereas Sysbox plugs into the Docker and Kubernetes ecosystem and, in a way, takes Docker and brings it closer to the capabilities of LXD, right? And we think that that is the missing piece, because people are already there: when they think containers, they think Docker, they think Kubernetes, and so that's the way to go. As I mentioned, there's something called rootless Docker — not the same thing. That's running Docker on the host without privileges, which is very challenging. It does not result in containers that can run Docker or Kubernetes inside; it just means you don't need root access to run Docker. Kata Containers and Firecracker — those are VM-based, micro-VM-based approaches that wrap containers in micro-VMs. Very interesting tools, but not the same thing, right? In some cases they may be better: if you want stronger isolation for your containers, those approaches probably give you stronger isolation. But then again, they may require hardware virtualization support, right? Sysbox gives you stronger isolation than a regular container. It doesn't take you all the way to the isolation of a VM, right? But at least it gives you an alternative in the middle. Things like Podman are different, right? Podman is more of a replacement for Docker, right? Sysbox runs underneath Podman; it would integrate with Podman. There's also something called gVisor, which is a container runtime, but its purpose is securing the container by trapping and restricting some of the syscalls it makes towards the kernel. Again, it doesn't result in containers that can run things like Docker and Kubernetes inside.
There's also a tool called Footloose from, I think, Weaveworks. It uses Docker to run VM-like containers, but it requires privileged containers. And kind, which I talked about, right? It also does Kubernetes-in-Docker, but again with privileged containers and complex entry points. And here we have a bunch of resources: our website, the GitHub site for Sysbox itself, some other media where we've managed to get coverage, and some demo videos. And that's our contact information. Thank you so very much for the opportunity to present to the CNCF SIG. Thank you. Yeah, thank you. It was a great presentation. One question that I have: for the open source components, are you planning to donate this to a foundation like the CNCF? Have there been conversations about that, or not yet? No, not yet. At this point, it's something that has crossed our minds, but we haven't given it serious thought, just because we haven't had the time. We're so focused right now on keeping the company going, growing the company, right? It's a very young company. We want to keep it alive through the challenging years. So we've just been focusing on building the functionality that needs to be there and getting customers, without any red tape or anything like that — just coding away. Having said that, people have already asked us, hey, are you planning to go into the CNCF? So that's something that has crossed our minds. It's certainly something we would consider. We would need to balance it out and see, also from a business perspective, how we can arrange it so that Sysbox can be part of an organization like the CNCF, or even the OCI, and at the same time make sure we're able to keep the lights on and have some avenues of revenue that create a nice relationship between the open source and the business. Correct. Yeah. Yeah.
So you mentioned that you're more of an open-core model, meaning the open source version is freely available — and I guess you mentioned the Apache license, right? Yes. What would be the components that are not part of that open source? You know, basically all of the components that I showed in the design, and all of the basic functionality, are open source, right? Because that's what you need in order to get a basic thing working. What is not part of the open source is functionality or features that are meant more for the enterprise, and those revolve around even stronger security, efficiency, and scalability. On the security side, for example: if you use the Sysbox open source version, all containers always use the Linux user namespace, but they all get the exact same UID mapping to the host, right? So container-to-host isolation is strong, but cross-container isolation is not as strong. If you use the Enterprise version, it gives you exclusive mappings per container, right? So that's one feature. On the efficiency side, if you use the free version and you have many of these Sysbox containers and deploy inner Docker images inside, there's no sharing of layers between them, right? You end up paying the price in host storage. If you use the Enterprise side, it will do the sharing and reduce the host storage, right? It'll be faster. So the criterion we're using, at a high level, is: features that are meant for developers — that let developers benefit, play around with this, even start using it in their CI initially — those go in the open source. They help us generate adoption; they give us great feedback. Features that we think are more for production-ready, enterprise-ready use — hardened security — those we are typically, right now, reserving for the Enterprise version, right?
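To make the UID-mapping point concrete: inside a user-namespaced container, `/proc/self/uid_map` shows how container UIDs map to host UIDs. A hypothetical sketch of the difference being described — the host UID ranges here are made up for illustration:

```
# Same mapping for every container (the free-version behavior described):
#   container UID 0..65535 all map to one shared host range
$ cat /proc/self/uid_map
         0     165536      65536

# Exclusive mapping per container (the enterprise behavior described):
#   container A -> host 165536..231071
#   container B -> host 231072..296607
```

With a shared range, a process that escapes one container runs under the same host UIDs as every other container; exclusive ranges keep the containers' host identities disjoint.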
And then as we grow, some things in the Enterprise version may eventually end up being open sourced, right? I don't think we'll ever go the other direction — once it's open source, that's it. But that's sort of what we have in mind right now. Got it. Thank you. Okay. So, yeah, thank you very much. Let me ask you a question. Is this something that would be interesting for the CNCF, either to adopt as a technology or at least to help us generate more adoption of it? What are your thoughts on that, Ricardo? Yeah. So, as far as projects go, the CNCF has different levels, right? There's the sandbox level, then the incubation level, and then graduation. Some of the more popular projects have already graduated — the main one being Kubernetes, right? There are certain criteria for each of these levels. Typically, most projects join at the sandbox level, and that's more like a playground type of level where people can just see the project and how it works; it's not necessarily very mature. But then they get some of the exposure from the CNCF — being mentioned, maybe at a KubeCon, being shown on the website. Some of those things, right? So there are a lot more projects at the sandbox level. And then from there they start maturing, and if they have more adoption and more use cases, they can apply for incubation. At the incubation stage there's some due diligence, where a member of the TOC reaches out to end users and members of the community to find out details about the adoption and how the project is doing. And eventually there's a vote on whether to include it in the incubation stage. And that gets even more exposure, right? I think for incubation there are sessions at KubeCon specifically for projects in incubation.
And I think there are some other things, like being listed on the CNCF's GitHub page. But yeah, in general, you go through these levels, right? The higher you go, the more exposure you get, and the more mature the projects are. But yeah, you can take a look at the CNCF page, see the projects there in the different stages, and see how that fits in with Sysbox. Right, got it. Yeah, I guess it would be on us, right? To say, hey, this is a project — to see if the CNCF is willing to talk to us, right? Yeah. And reach out to you — is that how it would work? Yeah. So if you wanted to do this with Sysbox, there's a link on the SIG Runtime page, the Google Doc. And there's a spreadsheet where you can post your project, and then it can be voted on whether it's accepted into the sandbox. And it's a very straightforward process — the bar for getting into the sandbox is not very high. So if you put it there, then I think every month there's a vote on whether to include it. And it needs to pass certain criteria: I think it needs to have an OWNERS file in the GitHub repo; it needs to have maintainer information. All those requirements are listed on the CNCF TOC GitHub repository, and I can send you the pointers. If you're interested and you can't find that information, I can forward it to you. Yeah. And generally, if all those requirements are there, then it gets accepted into the sandbox. And one of the reservations we had when someone talked about the CNCF, or some of the other organizations, was — in our minds we're thinking, okay, we go there, we're probably going to get good adoption immediately, or more visibility. But I guess we're a little bit afraid of losing control.
And by that I mean, in particular, control of which features go into the free version versus the Enterprise version. Because that is sort of the key line for us, from the business perspective, to be able to draw. What are your thoughts on that? Yeah, so my thoughts are: whatever you think is going to be open, you can donate to the CNCF. But if you know something is not going to be open, just decouple that part. Because once you donate it to the CNCF, it's already going to be public and open source, right? So whatever you don't want to be there, don't make it part of that GitHub repository. Sure, sure. Keep it in a private repository or something, and have your own license for that. But anything that is public — with respect to that specific GitHub repository, or it could be a group of GitHub repositories, whatever you're actually donating to the CNCF — yeah, make sure that that's never going to be... But would there be a risk where, let's say, you donate the open source part to the CNCF and keep the proprietary part, and as time goes on, the CNCF adopters or developers say, hey, I want this functionality that's in the Enterprise version, we need it in the open source version — and they sort of start cannibalizing the Enterprise version? And then it puts you in a sort of defensive position on the Enterprise version, because then really... Yeah, yeah. It's always a risk, I imagine, with open source. Yeah, I think there's a little bit of a risk there, but... it's just a matter of what the organization can do to keep its value over the long term, right? Right. You mentioned that in the beginning some components are paid, and then later they could become open source. That's right. But if you do that, you...
The assumption is that you're already working on some other... Correct, correct. Something else that provides the value for your organization, right? That's right, because we see value not just in the runtime — the runtime is the piece we're working on right now, but we do think it opens up a bunch of new use cases for containers, and as a result there's going to be opportunity around those use cases too, right? Not just the runtime itself. So the runtime can be a catalyst for a bunch of new use cases, and there may be value there too, right? But we're not there yet, and we don't know when we will be, right? We'll see. Yeah, exactly, exactly. So my recommendation is: if you decide to do it, make sure that those components are going to be open source forever, right? Those components. And then — I mean, I personally think it's great that you also have the other features, right? Because that actually helps you sustain yourself as a company, and it also helps you sustain the project, right? So it works. Is there a bigger forum in the CNCF where we could present? We appreciate this invitation, but is there another forum within the CNCF, maybe with a bigger audience, at which we could also present, even without being part of the CNCF, where we could say, hey guys, this is what we're up to? Mm-hmm. I think this is it. I mean, the next step would be, yeah, the sandbox application. I see. Yeah. And then the next steps would be KubeCon sessions or something like that — submitting some sessions. There could also be... I've seen the CNCF has its webinars, and you could maybe schedule a webinar with them and work with the CNCF to get more exposure, additionally to this presentation. Yeah. That would actually be great. I'll try to...
If you know who I should contact, please let me know; otherwise I'll go ahead and look. Yeah, I can ask and provide that information for you too. Yeah. That would probably be a good first step for us, I think, and then eventually we'd consider the sandbox thing, or even just joining the CNCF. We haven't even joined, just because we're trying to be careful with the way we spend. Yeah. I mean, you can join the foundation at a certain level — they have different levels, like silver, bronze... Right. That at least puts something into the ecosystem and gives you some even more exposure. Yeah. That might actually be a requirement for a webinar, but I'm not really sure. Right. Okay. I don't work at the CNCF — I mean, this is work, but what I do is voluntary. But you can talk to the CNCF staff. If I may ask one more question: what are your personal thoughts on the technology that we're developing? Do you see potential for it? I know it sounded like it's interesting, but do you see real potential for it from a real-world perspective? Yeah. I think so, from the use cases that you mentioned, like CI/CD, where people want more isolation, and more density than something like a VM. And personally, for example, we're using GitHub Actions, right? And we run these in Kubernetes clusters right now, but in some cases a lot of folks want to run this in more isolated environments, and then this would be a good use case. But, you know, on the other side, I also have to say that there are a lot of different technologies — you mentioned some of them on your slides, like gVisor — so there are a lot of different options and a lot of different choices. So, I mean, that could be a challenge. Yeah. Certainly. Certainly. There's a lot of growth in runtimes — a lot of runtimes are coming up. Yeah.
And then there's probably a lot of confusion too. Because with this technology, you need to understand all the details, and not everyone who is a decision maker is actually very technical or understands all the details. So there could be a lot of confusion, right? Why do I need to use gVisor? Or why do I need to use Sysbox? So there's also a lot of work in terms of education. Yes. Yes. Yeah. We have put a lot of effort into our documentation on the web, on our blogs, and into videos ourselves, to try to quell some of that. Thank you. We don't mean to take up much more time — Rodney, any questions you have for Ricardo? Sorry. Awesome. No, thanks for the opportunity again. A quick one, yeah. If you have any follow-up questions, I'm available on Slack. Okay. Yeah. Thank you so much for the invitation, and we'll keep in touch. And if you do have a contact for the webinar, please send it to us; otherwise we'll go ahead and try to find the right folks and see if there's an opportunity there for us. Awesome. Awesome. Right. All the best. Thank you so much, everyone, for joining us. Thank you. Bye.