So welcome everyone, and thanks for being here. I think there was a lot of competition from other very interesting talks right now, so I really appreciate that you decided to come here. After an amazing talk about containers, we will talk again about containers on FreeBSD, the hot topic of the day. Because the previous talk already covered what a container is, I will be very fast on that, and I would also like to involve you a bit. I will talk about what pot is and what we have reached so far, and then I have a couple of provocative thoughts about how to move forward and what are, in my opinion, some non-technical obstacles we are facing to reach the next point. Who am I? My name is Luca Pizzamiglio, complicated last name. I have been a FreeBSD ports committer since 2017. Reach out to me for any additional questions; this is my email address. Working at E4 for, I don't know, 25 years now.

But yeah, what is a container? Do you have an easy answer? What is, in your opinion, a container? Good. I mean, what is a container? If I ask you? Someone asking your partner, or whoever: I heard "container", what is it? A secure sandbox. Hopefully a secure sandbox. Interesting. Something that provides my solution over the network, or lets it get done in some amount of time. An isolated environment to run your stuff. OK. Yep, sandbox. I'm not asking for properties; really, what is it? I mean, OK, it's self-contained, but it is an isolated environment. If I ask someone outside our field, what is it? Yeah, it's a box where you can put your stuff and get rid of it, that sort of thing. It doesn't have to be a sandbox. It's something physical; it doesn't have to be isolated, it can actually sit next to other containers on the shelf, that's not a problem. OK, so it doesn't have to be a sandbox, but it has to have a specific purpose. Well, the purpose is that you should be able to get rid of it. So really, something you can have multiple of and get rid of.

So, very different definitions. My problem is that there is no standard definition of a container. Nobody defined a container. It's not like POSIX with processes; there is no nice definition. Container is a, not naive, but a not well-defined concept. Everyone tends to see containers through what they work on. Here in this community there is a lot of focus on the runtime part of containers. If you read the definition on the Docker website, they say that a container is a way to distribute applications, with a lot of focus on the application, to the point that they even see it as a kind of replacement for package management. It came up today in the keynote: Docker enabled what? Oh, I'm a developer, I depend on a subsystem, I need in my local environment this authentication application, or whatever it is nowadays with all those microservices. I just launch the Docker container and it runs, and the authentication application of my company is there. So I installed, in double quotes, an application through a container. That is somehow the mindset Docker created, and the focus was on developers first. So containers somehow created a bridge between developers and operations: OK, I don't give you the source code so you build it and deploy it; I give you an image with my application and you deploy it. It's already built, it's already there. It just somehow moved the focus.
So a container might be seen as a way to distribute an application, and it has two main parts: an image and a runtime. It's not just a runtime. A lot of the answers were focused on the runtime, because in our community there is a lot of focus on the runtime, and also the cool stuff is in the runtime: all the security and isolation properties. Almost nobody focuses on the image, which is actually the first thing you need to distribute your application, your code. Any additional thoughts about it? I mean, this is a more or less acceptable definition, zooming out a little from the typical runtime part. That is also why the OCI specification actually contains two specifications: one for the runtime and one for the image. You cannot live without either, so there are basically two. In my opinion, the best parallel is: the image is the program, the runtime is the application, the process. If you see a container as one single application, the analogy is between process and program. "Container" means both; there is no precise nomenclature, but the container image is your program and the container runtime is your process. Just to give you an idea.

What is pot? Back in the day, I saw people using Docker and I was really jealous. So easy: oh, I don't have to download different versions of, say, nginx; I cannot install multiple versions because there is only one package. Oh, but you can use Docker. No, I cannot. Basically it allows you to do many nice things, and that's how containers work, and there was nothing like this kind of support for FreeBSD. FreeBSD has a lot of cool technologies in it; jails are the first one. So what is missing for FreeBSD to have a container-like framework? I started this project as an educational project, to prove that FreeBSD was somehow container ready. The choice was to use shell scripts only, because it means I don't have to write additional complicated features: everything is already there, and I just use the utilities out there as a way to prove that everything is already available. It started in November 2017 and is somehow still progressing, still alive in a way.

If you ask yourself what pot is: basically, pot is heavily based on jails. At the beginning there was a lot of confusion, because it's a naive approach. We just said, oh, we can have an easy, jail-like approach with a single dataset. I'm focusing only on the container-like framework, meaning that a pot is a single, let's say, ZFS dataset with everything you need: base, packages, application, configuration. Everything you need is part of the image. To create an image that way, we just added support for scripts, which is usually what you need. And the problem was, there were many other frameworks back in the day that were already managing jails, but none of them supported images. So we said, OK, what is the easiest way to create an image? Well, you have one ZFS dataset: take a snapshot, then send it and compress it into a file. If you copy that file over, you reverse it: you extract it, do the zfs receive, and then you can clone it, and you have the image moved back and forth. There is a rough sketch of this right after this paragraph. So it was a working solution, and we are still using it. The focus was also on a couple of areas different from typical jail usage: the first was to have non-persistent jails.
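To make the image mechanism described above concrete, here is a minimal sketch of such an export and import with plain ZFS commands. Dataset names, paths and the image name are illustrative, not pot's actual layout, so treat this as an assumption-laden example rather than what pot literally runs.

```sh
# Build machine: snapshot the pot dataset and turn it into a compressed file.
zfs snapshot zroot/pot/jails/myapp@1.0
zfs send zroot/pot/jails/myapp@1.0 | xz > myapp_1.0.xz

# Copy myapp_1.0.xz to the target machine (scp, http, ...), then import it:
xz -dc myapp_1.0.xz | zfs receive zroot/pot/images/myapp_1.0

# Every new instance is just a cheap clone of the received snapshot.
zfs clone zroot/pot/images/myapp_1.0@1.0 zroot/pot/jails/myapp_instance1
```

The nice property is that the "registry" side needs nothing more than a web server hosting the compressed file, which is exactly the approach described later in the talk.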
On persistent jails: in iocage, for instance, back in the day, I remember talking to the developers and they said, no, we only support persistent jails. Because of the problem we already saw: the jail stop command is what executes the pre and post hooks and does all the cleanup operations. A jail is a structure inside the kernel; if the jail disappears, there is no way for the kernel to hook to the outside and run a post script. You have to run it yourself, from the host. But of course, we only want to run one process in each jail, not fork one from a start script, and the jail should live only as long as that process.

The other area of focus was not using rc. A lot of those frameworks focus on using jails as lightweight virtual machines: you have a subsystem running its own cron jobs, running its own crond, syslogd, and so on, basically replicating a typical full system. But that is a system, not an application. The idea was: OK, what if you only want to run one process, not many? And there was not much out there for that. The rc system in FreeBSD is jail-aware: when you have your init in there, it runs a few services, and the services know whether they are in a jail or not. We wanted to skip that entirely. We want to run nginx, not rc with nginx enabled; we want to strip everything down and focus on one application only (there is a small sketch of this right after this paragraph). The reason is that container best practice says a container runs one process only, not multiple processes. It's a best practice: you can do otherwise, but it's usually a source of pain. Imagine, for instance, the concept of a pod in Kubernetes. A pod is multiple containers: you have a container running your application, then another one running, for instance, an exporter, and another doing something else. You have multiple containers sharing something in between, but each one is its own container. If one container dies, you don't need to restart everything; you only restart the single container that died, and everything comes back as it was. If the application itself forks sub-processes, that doesn't work as well, right? If you need multiple processes, you would rather have multiple containers. So multi-process applications are usually a very bad idea in containers. The container idea is to exploit horizontal scaling, not vertical scaling: if you have a performance issue, you scale out with more copies of the same thing, you don't make the single unit more performant. So you want this so-called horizontal scaling, and you just put multiple instances in parallel, if you can.

Would you also apply this to forking daemons like Postgres? Forking daemons? Yes, we'll talk about that later, because with Nomad exactly this is a problem. You shouldn't fork anything; you should stay in the foreground where you're running. Yeah, no forking.

Now a bit about the runtime. The choice was, obviously, to use everything that was already available in FreeBSD, so jails. Not only that: there was, I think, a KubeCon talk with some sort of history of containers, and they declared that jails were the first container idea that came out to the world. So jails are not only the obvious, natural way to implement containers on FreeBSD; they were also somehow the first idea of this kind of containerized environment.
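Going back to the one-process, no-rc idea: here is a rough sketch of what running a single foreground process in a non-persistent jail looks like with plain jail(8). The jail name, path and the use of nginx are illustrative; this is not literally what pot generates, just the concept.

```sh
# Start a jail whose lifetime is exactly the lifetime of one nginx process.
# No /etc/rc, no crond, no syslogd inside the jail.
jail -c name=web1 path=/opt/pot/jails/web1/m \
     host.hostname=web1 ip4=inherit \
     command=/usr/local/sbin/nginx -g "daemon off;"

# When nginx exits (it stays in the foreground because of "daemon off;"),
# the jail goes away with it, matching the one-process-per-container practice.
```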
So, as said, using jails was obviously a no-brainer. There is a lot of other stuff too, like RCTL: we want to have resource limitation. We use VNET for networking; I'll talk about networking a bit more later. Bridges, PF, everything that is basically already there. Opinionated choices: I remember the first issue I got on pot was, oh, can you support IPFW? I mean, patches are welcome, but it's a lot of work. I just decided to use PF, simply because I knew how PF works. But as usual, patches are welcome: if you want to support other firewalls, just do it.

Networking: a lot of opinionated and maybe bad choices when it comes to networking. I'm not very strong on networking, so I basically did what was working. We have four different network setups. Everything that jails already provide was a no-brainer to make available: inherit, inheriting the network stack of the host machine, which typically is good if you have a process that needs to fetch something from outside, not to expose network services. Alias is the other typical way: you just put an alias on the network card and then attach it to the jail. Then we have a couple of setups using a bridge. The bridge is a way to create an internal network that is completely internal, virtual and detached, and the idea is to use PF to provide connectivity to the outside; more on that in a bit. If you have a lot of jails sharing the same bridge, there could be some performance issues, so we also added the ability to have a private bridge: you can create smaller bridges, where fewer jails talk to each other, just to reduce a potential bottleneck on the bridge. The problem was deciding which IP address every jail gets, some sort of static DHCP kind of thing. That's why I developed a small application just to manage this IP address space. That was one of the few additions we had to write ourselves; it's not really needed per se, but it was somehow useful.

One of the latest additions is support for multiple IP stacks. It's easy with jails to have IPv6; it's less easy to have an environment where you can actually try IPv6 addresses. But the idea was to support IPv4, IPv6, and dual stack. If the host machine supports IPv6, you can give IPv6 to your jails; you obviously cannot have it the other way around. A few choices there. As I said, in the IPv4 implementation you basically have a bridge where all the jails live. You put the default gateway address on the bridge, then you configure PF to do the NAT for the outbound traffic, and you dynamically add redirect rules for the inbound traffic. So if your jail exposes a network service, you can create a redirection rule that uses the network interface of the host to reach the service in the jail. There is a sketch of this bridge setup right after this paragraph. The IPv4 network is fully isolated: you define which IP range to use, and this network is not visible at all from the outside. The idea there was to have some sort of Docker-like experience on my laptop, where I don't know which network I'm going to attach to. Managing the addresses of everything all the time would be complicated; you cannot really do it.
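For the bridge setup just described, here is a hedged sketch of the plumbing with plain FreeBSD commands. Interface names, the jail name and the 10.192.0.0/10 addresses are illustrative; pot automates all of this for you, so this is only meant to show the shape of the setup.

```sh
# Internal, virtual bridge with the default gateway address on it.
ifconfig bridge0 create
ifconfig bridge0 inet 10.192.0.1/10 up

# One epair per jail: the "a" side stays on the host bridge,
# the "b" side is moved into the (vnet-enabled, already running) jail.
ifconfig epair0 create
ifconfig bridge0 addm epair0a up
ifconfig epair0a up
ifconfig epair0b vnet web1

# Inside the jail: address on the internal network and default route via the bridge.
jexec web1 ifconfig epair0b inet 10.192.0.3/10 up
jexec web1 route add default 10.192.0.1
```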
So I wanted to remove this address-management burden from the user and say, OK, just give me a range that never conflicts. The default is basically 10.192; that's the default, but you can change it. All the jails live in this virtual network that hopefully doesn't conflict with wherever you are. Fully isolated, it works pretty well, and it's automatically managed by pot, so you don't have to know how PF works; it was just fine. There is a rough sketch of the kind of PF rules pot manages for you at the end of this paragraph.

On IPv6, I didn't know how to proceed, so I just asked people, and they said: come on, on IPv6 you don't do NAT and redirection. That's so IPv4; IPv6 is there precisely to avoid all those kinds of things. You just attach your jails directly to the network; that is how you do it on IPv6. Cool, so I went with this idea fully. On IPv6 you have a different bridge, only for IPv6, because what you do is: you take the network card, you put it in the bridge, and then your jails just get an address from the network with rtsol. Basically they send a router solicitation and then autoconfigure; you don't have to do anything. There are several problems with this approach. The network card goes into promiscuous mode, because every jail, using VNET, has its own MAC address. With WLAN, if you run on Wi-Fi, you only receive the traffic for your own MAC address, so this doesn't work on your laptop. It works just fine if you have a physical wired network card with IPv6 and you attach to that, but it basically runs in promiscuous mode and has to see all the traffic just to capture the packets destined for the jails. Why would that be such a problem if you have a switched network? The overhead can be handled by the top-of-rack switch; it's not a problem. As I said before, I'm very bad with networking compared to the average at a BSD conference, so this is an area for improvement; we'll talk a bit more about it later, because this is also not a very good idea. Patches are welcome; feel free to reach out, I'm really happy to improve this.

But what we did later: the idea originally was to develop pot only to imitate Docker, to focus only on the container kind of features and not to care about orchestration. At a certain point, though, orchestration is the way to go nowadays. So back then we looked at Nomad. Nomad is an open source container orchestrator, a sort of competitor to Kubernetes, developed by HashiCorp. The biggest advantage of Nomad is that it is designed from the beginning to run on different operating systems. HashiCorp has a lot of enterprise customers; they run on Solaris and on other stuff. There was already a Nomad port on FreeBSD, so I didn't have to do anything: Nomad is there, you can already run Nomad on FreeBSD. Kubernetes, forget about it, it doesn't even compile; it's a nightmare. But for Nomad, HashiCorp cares about having it run on multiple operating systems; not being centered on one operating system is one of their things, so it's already designed to be more flexible. What you have to do, if you have a different container type, is just create a driver: you expose the features that Nomad needs to schedule and start a container, and that's it. So a colleague of mine wrote a driver, a driver for Nomad to interact with pot, and you get a Kubernetes-like experience.
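Going back to the IPv4 bridge setup: to give an idea of what pot manages on your behalf, this is roughly the kind of PF configuration involved. It is a simplified, illustrative sketch; the external interface name, port numbers and addresses are assumptions, and the real pot setup differs in the details (for instance, the dynamic rules are kept separate from your main ruleset).

```sh
# Roughly: NAT the internal network outbound, and redirect one host port
# to a service running in a jail on the internal bridge network.
cat <<'EOF' > /tmp/pot-nat.conf
# outbound: NAT everything coming from the internal 10.192.0.0/10 network
nat on em0 from 10.192.0.0/10 to any -> (em0)
# inbound: expose a service on 10.192.0.3:80 as port 31000 on the host
rdr pass on em0 proto tcp from any to (em0) port 31000 -> 10.192.0.3 port 80
EOF
pfctl -f /tmp/pot-nat.conf
```

The rdr rules are the part that gets added and removed dynamically as jails come and go, which is what the orchestration layer relies on later.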
What does that mean, at a very high level? I don't want to spend too much time on it, but basically you define some sort of job, as Nomad calls it: you specify that it's a service, plus other details like where the image is and so on (a rough sketch of such a job file follows this paragraph). For instance, you can say, oh, I want two instances of that service. Those nodes in the middle of the slide are different Nomad workers; if you specify that this is a pot thing, you need those nodes to have the pot driver to talk to. So let's say this is some sort of control plane where you decide what to run, and those are the workers. Since you can have multiple nodes, you can have multiple instances of your stuff. I put the IP addresses on the slide just to show the IPv4 on the bridge: this is the internal virtual network, the 10.192 blah blah blah. They can all have the same internal IP, who cares, because there is PF doing all the translation. What matters is the IP of the machine, because the service is not reachable on the internal address from outside, but it is reachable through the host address.

Nomad is an orchestrator; it doesn't manage the service catalog. Consul, on the other side, is another service that manages the service catalog. What does that mean? It means that, oh, I have two instances: Nomad decides where the instances run, which port to expose and so on, and Consul keeps track of them. So if you want to reach fubar, which is over here, you use this IP with this port. And here there is pot, injecting those PF rules on the fly and doing all the magic to make the fubar service, running in a jail, exposed to the outside. This is the cloud way of doing services. Additionally, you can have an ingress, which is basically some sort of load balancer that dynamically figures out how to reach a service, so you have only one single point of entry. For instance, in this setup there is Traefik, a more or less preconfigured system: depending on the Host header, it decides, oh, if your Host header is fubar, you're looking for the service fubar. Traefik talks to Consul to see who is implementing the service fubar and then routes accordingly. The list of backends is automatically updated all the time: when you do a new deployment and the port number changes and so on, Traefik will update the list of backends automatically. And this is the Kubernetes-like environment, and it works; we have it up and running. The problem is that you need to know more or less how Nomad, Consul and all those kinds of services work to be proficient with it, because it's a different world and it can be complicated.

But then, with images, we needed a repo for the images. The image handling was very naive. I didn't want to write a registry, because I see security issues everywhere: the problem is, oh, you download random stuff from the internet and you run it on your system. Bad idea, usually. So I didn't want to take on this burden: it's an additional thing, authentication, too much. I'm not an expert, I would cause disasters there; I'm already doing questionable stuff here, so let's not make my position worse. But we can definitely do it as a proof of concept, with disclaimers everywhere: use it at your own risk, whatever. At least there is something to play with. So there is a repository where you can put your flavours, and then this system will just create images for you, for different FreeBSD versions and so on.
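Going back to the Nomad job definition mentioned above, here is a hedged sketch of what such a job might look like with the pot driver. The overall HCL structure (job, group, task, driver, resources) is standard Nomad; the pot driver's config keys, the registry URL and the image/pot names shown here are assumptions for illustration, so check the driver documentation for the real field names.

```sh
cat <<'EOF' > fubar.nomad
job "fubar" {
  datacenters = ["dc1"]
  group "web" {
    count = 2                          # two instances; Nomad picks the worker nodes
    task "fubar" {
      driver = "pot"                   # the FreeBSD pot driver instead of docker
      config {
        # Field names below are illustrative assumptions, not authoritative.
        image   = "https://pot-registry.example.org/registry/"
        pot     = "fubar-amd64-13_1"
        tag     = "1.0"
        command = "/usr/local/bin/fubar"
      }
      resources {
        cpu    = 200
        memory = 128
      }
    }
  }
}
EOF
nomad job run fubar.nomad              # submit the job to the control plane
```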
For every image in that repository there are also predefined command lines showing how to use and download the image. So it's some sort of Docker Hub for pot images; that was the ambitious goal, still with a heavy disclaimer: at your own risk. In the end, the registry URL is just a web server from which you download a file, because the image, as we said before, is only an archive. We put some attributes in the name, because it's a naive implementation: oh, which FreeBSD version is it? Oh, it has _12 in the name. Not a great implementation for sure, not OCI, but it was something to move forward with and improve the concept. That was additional work done by Stefan.

And how far has this naive implementation reached? One of the problems when you run containers in production is: oh, this container is misbehaving, the developers need to SSH into it. Well, you don't SSH into containers. What you usually do is, oh, I need a shell in that container, which is basically equivalent. With a jail, what do you have to do? The developer needs access to the machine and needs to run a shell with jexec inside the jail. A lot of permissions in between, and you have to be root to run jexec. There are similar issues in the container world in general. Nomad provides a web UI, and there is a very nice addition, the latest one, which shows how far we have reached: you can run a command in the pot directly from the web UI. There is a nice screenshot, because the demo didn't work, so the backup was a screenshot. This is the web UI; somewhere, a jail is running on some machine. From the control plane you basically do, oh, nomad alloc exec, whatever. And what do you want to exec? A shell. And directly from the control plane, you reach the jail and you have a shell running there. That is the latest achievement, and it shows how far, in my opinion, even a naive implementation can go. The command-line equivalent is sketched right after this paragraph. Because the interface between the Nomad pot driver and pot is a bunch of shell scripts, and running shell scripts in production makes me nervous, I haven't deployed this in production myself. But, as I said, Grembo, who is another pot committer, has a professional instance of this system running at work. So there is something in production, even though he has a different network setup, so it's a patched version of pot, and I'm still trying to understand how he did things; he's better than me. We need to figure out a way to improve the networking, but his contribution was great for sure, because it moved pot from a proof of concept to something, I would say, semi-production ready. He needed these kinds of features implemented in pot and then in the driver, deployed and working.

Moving to slightly less technical things: why did pot reach this level? It was not just me, it's a community effort. The colleague who wrote the driver; Grembo, who ran pot in a professional environment and identified a lot of corner cases. We're talking about Postgres, a huge amount of issues: the Postgres shared memory was not cleaned up. Many corner cases had to be found and addressed. He also had an additional use case for batch jobs: when a batch job dies, you need to know whether it was successful or not. So we had to add commands to report the state of the job when it died, because it's not an always-running service, it's a different thing. Stefan stepped in and created the registry with good images.
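Here is the command-line equivalent of the web UI exec shown above. The job name and allocation ID are placeholders from my illustration, not real values.

```sh
# Find the allocations of the (hypothetical) job "fubar", then open an
# interactive shell inside one of them, straight from the control plane,
# without needing root or jexec on the worker node.
nomad job status fubar
nomad alloc exec -i -t -task fubar <alloc-id> /bin/sh   # <alloc-id> from the status output
```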
And then someone else was also very helpful, because he reaches out every quarter and says, hey, you should write something, otherwise I would just forget. So it's really a community effort: everyone contributed in their corner, and we could move on.

Move on with what? Layered images: basically, since we use ZFS, we just build images as layered ZFS datasets. We are still imitating the OCI somehow, but redefining everything is a lot of work, so we found a way to do it. DNS configuration: when you download an image, the DNS configuration, like the rest of the content, is part of the container. We added commands that change the DNS configuration on the fly, because you don't know where your machine is going to run. You have to strip away some of the configuration the image had when it was created to make it more agnostic; we decided to just overwrite things on the fly when the pot is prepared. Still, a lot of work. Flavours at the beginning were less flexible; now you can just keep your flavours somewhere and create things directly from there. I mentioned POSIX shared memory and its garbage collection: oh, the jail is not dying. Why? There is nothing in there. Oh, no, there is POSIX shared memory left. And this is a nice thing, it's in CURRENT: when those use cases come out and you speak with people, they are willing to fix it. So now in CURRENT, jails are able to clean up POSIX shared memory. It's a nice two-way collaboration: if you have use cases, people reach out and things get better. Another example is the ZFS encrypted dataset. Basically, everything that is available in the operating system is just a matter of exposing it above. You don't have to do much, it's already done; the operating system keeps developing, you just have to bring those features up and make them available.

This is the nightmare: starting and stopping a jail is not an atomic operation. Especially if you don't control who is starting what, you can start the same jail twice, just a few milliseconds or microseconds apart. The operation to start a jail with pot specifically is: you have to modify things, prepare the file system, prepare the network stuff, and then finally start the jail. There are a lot of operations in between; it's not atomic. And because it's a shell script, it just does stuff along the way. It checks: oh, does the jail exist? No. But there is another process that is already starting the jail and doing everything. And then you have race conditions like hell, and fixing this in shell, given the limitation of the initial choices, is not great. There is a small sketch of one way to mitigate this right after this paragraph. Then there were also some updates on the Nomad pot driver: batch jobs, periodic batch jobs, other batch things, the ability to send signals, exec and so on. The idea was: oh, you need it? You implement it in pot, then you expose it in the driver, and then Nomad can use it.

So, still a few issues. The redirection: if you use the host IP directly from the host itself, the redirection just doesn't work, because when you are on the machine, as we saw before, if you ping the machine's own address, well, it's localhost, you never go out to the physical network. Somehow, if you try to reach fubar using that address, the one registered in Consul, it doesn't work: the traffic stops on the way back from the jail, it gets lost in between because it stays on localhost instead of the real network, I don't know; there is something that doesn't work.
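Going back to the non-atomic jail start problem: one low-tech way to serialize the start sequence from shell is FreeBSD's lockf(1). This is a hypothetical sketch of the idea, not what pot actually does today; the lock file path and jail name are illustrative.

```sh
# Take an exclusive lock for this jail for the whole start sequence
# (modify config, prepare filesystem, prepare network, create the jail).
lockf -k /var/run/pot-web1.lock pot start web1

# A second "pot start web1" issued at almost the same time now blocks on the
# lock file instead of racing the first one through the intermediate steps.
```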
For this redirection problem we have a workaround that is horrible, really bad. And Michael said, oh, I have this patched version that uses a reflect jail, but really, I don't know what it is. He knows, he has a valid solution, and we will integrate it one day. We also have the ability to mount stuff inside the jail; we use nullfs heavily. But the typical Docker usage is, oh, I just mount a single file. Mounting a file, if you let me use that word, we cannot do: we have to copy it. You cannot mount a single file; a mount point is a folder, it cannot be a file, and that is somehow a limitation. Currently you need a folder with that one file inside, and that is the workaround. You could probably achieve the same thing in other ways, and there are workarounds, but it would be nice to use the same approach for everything. If you have a better idea, just write to me; using FUSE, for example, would just add more dependencies, and I'm not super keen on that. So the current workaround is nullfs: use a folder and put the file in there.

The problem is also that we always have to pay a lot of attention. In general, Dan Langille gave some good thoughts and good examples: if you start to abuse symlinks, things get tricky. The way we copy stuff now is: if you want to copy a file, we mount the folder in the jail and we run the copy inside the jail, when everything is mounted. The reason is that a symlink with an absolute path name is resolved differently depending on whether you are on the host or in the jail. So to avoid confusion: you want to copy a file? We mount the whole file system, you run the copy in the correct environment, and then you go back. That is another thing: we want to be able to do this without running the jail. You don't want to start the jail just to copy a file into it, so we have a separate way to prepare and mount the file system only, copy the file, and tear it back down.

This is another topic, and it's not just for us: there are these nasty race conditions, many of which have been addressed. It's not as bad as before, it really was addressed. The problem with VNET and jails is when you stop the jail and then immediately destroy the VNET interface, the epair. The epair b side needs to come back out, and if you destroy it in the same moment the jail is being removed, sometimes a race condition happens and it just explodes. Easy solution: we have a sleep. You stop the jail, you wait a little bit, and then you destroy the epair. I know it's not nice, but, I mean, it's shell script again; this is not easy stuff.

One big difference compared to, I'd say, the Linux world is resource limitation; the only really big difference is memory limitation. rctl tries to limit the memory use, but it's not as harsh as cgroups: it doesn't kill the process if it uses more memory, it just tries to limit the usage (a small sketch of rctl usage follows this paragraph). With cgroups it's the typical thing: oh, if you have a leak, just kill the process so it doesn't disturb anyone else. But you have to be careful, because if you limit the resident memory size too much, you can push the process into swapping, really. Yeah, last time I tried, it wasn't swapping; it was just going above the limit, it wasn't being stopped, and it wasn't swapping at all.
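For reference, this is roughly what the resource limitation looks like with rctl on the host; the jail name and the limits are illustrative, and whether pot applies exactly these rules is an assumption on my part. As discussed above, hitting the memory limit does not kill the process the way cgroups would.

```sh
# Cap resident memory and CPU for the jail "web1", check usage, then clean up.
rctl -a jail:web1:memoryuse:deny=512m   # limit resident memory of the jail
rctl -a jail:web1:pcpu:deny=50          # limit CPU usage to ~50% of one core
rctl -u jail:web1                       # show current resource usage
rctl -r jail:web1                       # remove the rules again
```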
But that was three years ago, so things could have changed. I think we have, yeah, five minutes left, so I need to rush a little bit. As I said, the initial assumptions were a big source of pain. Ideally, we would like to have OCI support, and lifecycle support, which is the same kind of struggle: jails are an internal kernel data structure, so to interact with user space you need some sort of supervisor that keeps track of the status of the jail and can automate the different operations. The kernel cannot say, hey, run this post-stop hook; it won't do it. You need something outside listening to the events and managing the lifecycle. With the Nomad pot driver, oh, we just worked around it: the Nomad driver is already taking care of the container lifecycle. So I just said, yeah, can you run pot stop there every time? Even if the jail doesn't exist anymore, just run pot stop, even twice. We made stop intelligent enough not to do too much, but at least it performs all the cleanup operations: unmounting the file systems, things like that. But if you don't use the Nomad pot driver, nothing cleans up for you. Another thing you can do with a supervisor: ideally the supervisor has root credentials, so a user can start a container just by talking to the supervisor, and you basically allow users to run jails without needing to be root. Whether that's a good idea or not, I don't know, but with a supervisor you can do it. As I said, shell is a lot of fun until it really isn't. Let's skip this.

The last thing is that we need to redesign many of the things we are doing right now, because the naive approach has, in my opinion, reached its limit. We can still do a lot of improvements, but they are not game changers; they won't make this widely adoptable, because there are inherent issues with the initial design, since the goal was completely different. So if we want to go from a proof of concept to a product, we have to address several things. Also, it seems that the FreeBSD community would like to have a container implementation, but it has to be a community effort. I mean, why are there thirty jail frameworks, and why did many of them die after one or two years? If it's only me on a Sunday on my couch doing stuff, after a while you just give up, because life moves on. So it's a community effort, not just because you cannot rely on one person, but because there are so many subsystems. I'm not a jail expert, I'm not a ZFS expert, I'm not a network expert. I managed to do something, but if you want to reach a certain point, you need people with better knowledge, with real experience in every subsystem, to give you the right hints. Even just now, when we were talking about networking, suggestions came out immediately: oh, you can do that, you can do that. You can do many things, but I don't know everything; that is why it's a community effort. The second reason is the different ways to use and stress containers. I made the analogy before between program and process, and it's really like that: you can use a process in four gazillion ways, and there are three billion use cases, and I cannot imagine all of them. So you need a community also to enrich the use cases and make this more reliable. One more opinion to add: we have to get rid of shell and use something that can actually be called a programming language; shell script is not a programming language. Provocative, I know.
Go seems the natural choice, but I'm a Rust guy myself. Go just has a gazillion modules already written for containers, so you don't have to reinvent things; for instance, modules for handling ZFS exist, so we could use those for images instead of writing our own. But we haven't made any specific choice. Last but not least, and this is the most complicated one in any possible community: developers. Back to the first point, containers are for developers, not for operations; operations was a consequence. If developers want to develop with or use a container, they have to be able to use it on their own laptop, in their own development environment. Here we have a few people running FreeBSD directly on a laptop, but only a few, and this is already the right crowd. Out there, nobody wants a FreeBSD laptop just to run jails, so you need to provide an abstraction. There are several options, but still. Exactly, but they provide it. Yeah, but then you have to maintain it: a product for the Mac to allow Mac people to use it, the same for Linux, with FreeBSD underneath as the actual target. You don't really want to develop something on the Mac that emulates FreeBSD. That is, I would say, a source of friction that is complicated to address, because it means you need developers on another operating system to support your stuff over there. I think this is a problem for adoption: developers don't run FreeBSD natively, period. Docker was successful because they put developers first and let operations figure it out; that is why it was successful, in my opinion. Thanks for listening.

I think we already have a lot of questions; I saw a lot of hands. I don't know if we have time. Can we take a couple of questions? Do people mind using the microphone, so I don't have to repeat the question? If you have questions, just reach the mic and shoot.

Do you have any problems with VNET with an interface that's not an epair? Because my experience is that when I use a jail with a single process and assign a real network interface to its VNET, the interface disappears forever once the jail dies. No, we don't have this problem. What we have, basically to avoid using /etc/rc, is a tiny rc, some sort of script created on the fly, depending on the configuration of the pot, that configures the network on your behalf. We inject the configuration into this tiny script, and then the script launches your application. We never had any issues so far with VNET; I mean, we had some in the past, but now it seems very, very solid. After that, the reallocation, knowing which interface you have to destroy after the container dies, is still a hard problem, because we're using shell scripts. It's not that the interface is actually destroyed; it's that, the way you pass your device into the jail, you need to use ifconfig with the vnet parameter, and the thing is, after the jail dies, you cannot easily move the interface from the jail back to the host. Yeah, we destroy it from the host. Basically, we put the epair b side inside the jail and keep the a side outside, hanging on the bridge, because we have the redirection on the bridge, so we don't do much with the a side directly. When the jail dies, we use some tagging and try to figure out which epair a side we need to destroy, and then everything is fine; we don't care if the b side is not there anymore. Usually everything is fine. There is a small sketch of this epair handling right after this paragraph.
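A sketch of the epair handling just discussed: tag the host-side interface so it can be found and destroyed later, after the jail has gone away. Interface and jail names are illustrative, and the tagging scheme shown here is an assumption, not necessarily what pot uses.

```sh
# Create the pair, tag the host (a) side, attach it to the bridge,
# and move the b side into the running vnet jail.
ifconfig epair1 create
ifconfig epair1a description "pot:web1" up
ifconfig bridge0 addm epair1a
ifconfig epair1b vnet web1

# Later, after the jail has died, find the tagged a side from the host and
# destroy it; destroying the a side takes the whole pair with it.
for ifn in $(ifconfig -l); do
  ifconfig "$ifn" 2>/dev/null | grep -q 'description: pot:web1' && ifconfig "$ifn" destroy
done
```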
So that is how we do it: we do a lot of operations outside the jail. We tend to do as little as possible inside the jail; we prepare the whole environment that is needed outside, and then we just start the process. One thing I wanted to mention: in Nomad you cannot fork, because Nomad wants to keep track of the process, so basically pot start never ends. There is also a Nomad log facility that grabs all the standard output and standard error and presents it in the web view. If you fork, Nomad thinks that, oh, the process is not there anymore. The typical example is nginx: if you check in Docker, they run nginx so that it doesn't go into the background, and you have to do the same for nginx on Nomad. It must not go into the background, otherwise things don't work. But if you run the same image on the command line, it just stays attached to your terminal. I think our time is over; time for lunch. So really, thank you for being here.