Okay, hi everyone. Thanks for being here. I'm Luca Pizzamiglio, and I'm going to talk about a small project I had in mind; there is a nice story behind it. First of all, who am I? I'm a FreeBSD enthusiast and a ports committer for a couple of years now. I work at Trivago, maybe someone knows it. I've been told that Trivago is sponsoring the event, so there are Trivago socks somewhere; if your feet are cold, there are free gadgets, there are socks.

So, I guess everyone knows what CI/CD means: continuous integration, continuous deployment, whatever. Nowadays it's really popular; every software project has some kind of pipeline to automate builds, automate testing, and so on. And, at least from what I've seen, there is a lot of interest in having this kind of tooling for FreeBSD too. Because if I'm writing a tool, I also want, for a lot of reasons like portability, to know whether my tool runs on FreeBSD. FreeBSD is not a very popular platform, we know that, and people usually think, okay, I'd have to install a virtual machine and do a lot of extra work for that. So there was growing interest, but a real lack of support. If you use Travis CI, Circle CI, whatever is out there, they only support Linux, maybe macOS, AppVeyor for Windows, but there is basically no support whatsoever for FreeBSD.

I remember speaking with a colleague and saying, yeah, that could be a business idea: there are startups providing this kind of service for Windows, you could do the same for FreeBSD. Nice, submit a talk, maybe start an open source project, maybe we'll do that. After, I guess, one week or ten days, Cirrus CI announced support for FreeBSD, so there is no business idea anymore whatsoever; it's gone. I was thinking, okay, I could withdraw the talk, but I'm still doing the talk and I'm still doing the project, just without the business idea.

Cirrus CI is a Google Cloud-based service. They provide FreeBSD support and they use Google Compute Engine, basically VMs. And I don't want to use VMs; I want something smarter. For the setup and the tear-down I can use jails and things like that, so how hard can it be? It should be easy.

Before this support existed, here is an example. I will talk about Rust a lot, because I'm moving from C to Rust and I love Rust as a programming language, so everything here revolves around Rust. I discovered that Rust has strong support for FreeBSD, and there is this great crate, libc, basically the wrapper around the C library. The question was: how did they automate testing of their stuff without CI support for FreeBSD, given that they do support FreeBSD? How did they end up doing it? Digging into the sources, you find, under libc, ci, docker, an unknown-freebsd Dockerfile. There is no Docker for FreeBSD, so if you want to run FreeBSD code inside a Docker container that is running on Linux, how the hell are they doing it? Yeah, QEMU. They are running FreeBSD in QEMU inside a Docker container. Yeah, exactly: the title says ugly workaround. So it's not nice, and now they are migrating to Cirrus CI because, come on, it's native, so they can do it.
I mean, obviously they know it's not perfect, but this is still the way they test NetBSD, because NetBSD has even less support for this kind of stuff, so they run QEMU inside the container to run NetBSD.

And I have one more motivation; this was the personal one. I use GitHub. I know, it's Microsoft, but I'm still using GitHub. And when you do a release of any utility, product, whatever, you can attach assets: you can have Debian packages, you can have Windows executables, whatever, but there is never a FreeBSD asset that you can download. So my motivation is going from this situation to this situation: having a tarball with FreeBSD binaries on GitHub. Now, I guess with Cirrus CI you can have it, but still.

So I ended up building this tool; it's on GitHub, of course. It's a command line tool: you tell it which project and where it is, and it will do the work for you. It's written in Rust, so if someone wants to contribute, it's written in Rust and it's there. It's still extremely limited; this is almost a pilot project, but it's already working. I mean, all the components are there and it's just a matter of extending support to more stuff, but we'll get to that later. Currently it only supports GitHub and Rust projects. Adding another language is really easy; adding another platform, I don't know, GitLab or whatever, is slightly more complicated, but everything is doable.

So, this is how it looks. You just call it: you give your GitHub username and the name of the project, and it will download it and do everything. There is an option for the tag: if there is a release with that tag, it will also upload the tarball with your binary.

Let's have a look at more or less how it works. The first thing I thought: okay, you want a YAML file. Typically you have .travis.yml, blah blah, some YAML file; this is the standard way to do it and I wanted to do it as well, so you don't have to say up front which version of FreeBSD to use, which version of the compiler to use, and so on. That means that, to know what to do, you have to download the project first, because the instructions, the configuration, are inside the project. So the first thing to do is download the project, put it in a ZFS dataset, and parse the YAML file, so the tool knows what it has to do.

Now, I want to build in an isolated environment, so obviously I want to use jails, but I don't want to create a jail from scratch just to build once and then destroy everything. So the idea is to have a catalog of images that the tool can just clone, run the build in, and then destroy. There is this image catalog, which I will come back to later. An image gets cloned, the project in the other dataset is just attached, and now everything is ready to run. The tool generates a build script, the jail runs the build, and then everything is torn down: the clone is destroyed, and then, because ZFS is great (we'll see exactly what that means), we just revert the ZFS dataset with the project. So if there is some pollution there, we just go back to the clean situation, and if I have to make multiple builds for multiple versions of the language, for multiple versions of FreeBSD, I can just iterate as long as needed.

I have a nice animation that shows more or less the same thing. At the beginning you have your catalog, and it can be as big as you want: multiple FreeBSD 12 and FreeBSD 11 images, multiple Rust versions.
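To make this clone/build/rollback cycle concrete, here is a minimal sketch of how it could be driven from Rust by shelling out to the zfs command. This is not the tool's actual code, and the dataset and snapshot names are invented purely for illustration; the point is that both the clone and the rollback are copy-on-write operations, so setup and tear-down cost essentially nothing.

```rust
use std::process::Command;

// Run a zfs subcommand and fail loudly if it does not succeed.
// (Hypothetical helper; the real tool has its own error handling.)
fn zfs(args: &[&str]) -> std::io::Result<()> {
    let status = Command::new("zfs").args(args).status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("zfs {:?} failed", args),
        ));
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Illustrative dataset names, not the tool's real layout.
    let image = "zroot/ci/images/freebsd12-rust-stable"; // prebuilt image
    let build = "zroot/ci/build/job-1";                  // throwaway clone
    let project = "zroot/ci/project/myproject";          // project sources

    // Snapshot the project dataset so it can be reverted after the build.
    zfs(&["snapshot", &format!("{}@clean", project)])?;

    // Clone the image: copy-on-write, so this is basically instantaneous.
    zfs(&["clone", &format!("{}@ready", image), build])?;

    // ... attach the project dataset, generate the build script,
    // start the jail and run the build here ...

    // Tear down: destroy the clone, revert the project to the clean snapshot.
    zfs(&["destroy", "-r", build])?;
    zfs(&["rollback", &format!("{}@clean", project)])?;
    Ok(())
}
```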
As I said, it's Rust-specific for now. What it does: it downloads the project, there is the YAML file, it puts it in a dataset, and then, really easily, we just take one image and clone it. The images are ZFS datasets as well, so the cloning is instantaneous. The tool generates the build script inside, the jail executes it, and then the artifacts are probably inside the project dataset. Normally, best practice is to keep the sources clean and use a separate directory to build everything; in this case I don't really care. When I'm in this situation the build is done, and to tear down I just destroy the ZFS dataset with the jail that executed the build. I still have my project dataset with the artifacts, but because I took a snapshot at the beginning, I can just go back and I'm back in the original situation. So even if it's polluted or modified, I don't really care, because ZFS gives me this time machine: I can go back. And once I'm here, I can iterate for the next thing I want to do, or whatever. Questions so far? That's coming in three slides.

This is the YAML file. I was ambitious, so I put in an OS key, so that it could work for every OS; for now it's only FreeBSD. For the language it's the same: okay, only Rust. Then you specify for which versions of FreeBSD you want to build your thing and for which versions of Rust. The "no deploy" part is there to avoid adding multiple artifacts to GitHub: you say, okay, I want only stable, or I want only FreeBSD 12, so you exclude combinations in "no deploy"; you just list what you want to exclude.

One word, though: having full control of everything made me lazy. Normally, what you see in Travis CI, Circle CI, whatever, is a lot of options here, because you cannot manipulate the build script. If you want to add environment variables, you have to put them there; if you want to update packages before running the build, you have to specify it there, because you cannot control the build script that will be executed. In my case I actually own the build script, so there is an option: you can specify your own build script instead of using the standard one. If you need additional environment variables, put them in there; more packages, put them in there. That's what I mean by lazy. I guess it would still be better to put those kinds of things in the YAML file and keep the script generic, but for now it's good enough.

So what is this build script? It's a typical Jinja template. I guess everyone knows what Jinja is; basically it's a template, and those placeholders are substituted. No, no, it's a Python thing, quite common on the web: you write, I don't know, a web page, and then you render the output with various values. This is the script, and this is how the tool can put information directly inside. For instance, "update" would be rendered as true or false; that information comes from the tool.

Starting from the beginning: this build script is the entry point of the jail that I'm creating. There is no rc or bootstrap; this is the only bootstrap there is, and I execute only the build in, let's say, our container. So the first issue is that there is no environment, so the first thing is to create the environment. Then run updates, if needed or wanted. Then there is the Rust-specific part to do the build. And the tool is able to figure out whether you want to upload or not.
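Roughly what that rendering step looks like with the Tera crate mentioned later in the talk. The template below is a made-up miniature, not the actual default build script, and the mount path, variable names, and commands are assumptions for illustration only.

```rust
use tera::{Context, Tera};

fn main() -> tera::Result<()> {
    // Miniature stand-in for the default build script template.
    // The path /mnt/{{ project }} and the variable names are hypothetical.
    let template = r#"#!/bin/sh
set -e
{% if update %}
pkg update && pkg upgrade -y
{% endif %}
cd /mnt/{{ project }}
cargo build --release
"#;

    // Values the tool would compute and substitute into the template.
    let mut ctx = Context::new();
    ctx.insert("update", &true);
    ctx.insert("project", "myproject");

    // Render the final build script that becomes the jail's entry point.
    let script = Tera::one_off(template, &ctx, false)?;
    println!("{}", script);
    Ok(())
}
```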
So if you have a tag, so there is a release on the other side, then you build a tarball, and then there are these specific GitHub API calls. This is to upload the asset: it has to discover which release ID it is and so on, so it does a lot of web requests to fetch this kind of information. If an artifact with that name is already there, it means you want to replace it, so you have to delete it first. But all this information is fetched directly by the tool. It's a little bit complicated, but it's already there: for instance, if I made a mistake and want to re-upload my stuff, it will do it automatically.

There is an option: this is the default build script, but you can take it, copy it, edit it however you want, and the tool accepts an option to say, okay, use this build script instead of the default one. So you can customize the build however you want. For instance, if the tarball should contain not just one file but multiple files, you have to modify it, because I don't have a smart way to do that yet. But yeah, that's the good part: when you are in control of everything, you can easily do many things, or break many things.

Images: how do you create images? I'm using pot containers, which are basically an abstraction over jails. Easy; the structure is not really sophisticated. What I do is use these flavours, basically a script; provisioning, terraforming, whatever the popular word is now. You basically create one container, one image, for every combination that you want, and this is the one that installs Rust stable in there. So basically you just install everything; it's the same concept you have in Docker or whatever, nothing really fancy. In the project there is a folder called pot images where those scripts are already provided, so you don't have to reinvent the wheel yourself. And it generates the jail for you and takes a snapshot.

There is one more thing here: you can start the image and log in to it. So if you want to modify it on your own, for whatever reason, for instance if you want to update your image, you just log in, run the update, take a snapshot. Then the tool uses pot clone, which takes the latest snapshot available of that image to create your new build environment. Everything is ZFS underneath, so cloning, snapshots, all this kind of stuff is based on that. And as you see, this is really easy; it's a script, so everyone can add their own. Questions so far?

[Audience question, inaudible] Yeah, I'm pretty sure. I'm using Tera, which is a Rust crate that implements Jinja, so it should support almost everything. It's not just simple substitution: you can also have dictionaries, data structures, you can even generate code by unrolling loops or something like that. It should be full Jinja, more or less. Other questions? Great.

It's a pilot project, but what I want to do now is make it easier to use and, first of all, extend the audience. So I want to add more languages, and I would like to add more platforms, meaning GitLab or other Git providers. The logging is a disaster currently: it just produces a huge file at the end of the build, so you don't know what is going on. I would like to improve that a bit. But the biggest step that is needed: currently you have to prepare your build images on your own machine to run this kind of stuff. What would be really nice would be having this catalog of images somewhere else.
You download it, the tool downloads it, builds everything, and that would be super fine; a kind of Docker Hub thing. It's not impossible, because those are just ZFS snapshots: you can send a snapshot, create a file, put it somewhere, done (there is a small sketch of this idea below). The security implications, though, are the thing that scares me the most, because the other way around, if someone tampers with your images, I mean, it's not nice. That's the only part that really worries me.

[Audience question] Is the plan to work on this by adding download support to pot, so you basically download jails and run them on your system? Probably; I don't know, maybe some HTTP server that it has to reach, locally or somewhere else. Yeah? My assumption is that if I use GitHub to store my images, I could get banned: a compressed image is around 120 megabytes each. Space is not really the issue, but the bandwidth can be, I guess. If someone has some idea, I'm open to it.

So, pot has two ways to work. What I'm doing right now is the easiest way: one big single dataset. The other way is to have three separate datasets: one with the base, which is reused by everyone that wants the same base, one with the packages, and one with the configuration. It's doable; it just makes everything a little bit more complicated, because then you have dependencies between datasets: before this one I need that one. But I guess it's doable. I'm open to help and contributions in that area, because it's something that is really needed. I mean, it would be a game changer for the whole FreeBSD ecosystem if you could download jails and run them easily. So I'm open to anything; yeah, but it's quite different from what I'm doing. I was supposed to speak yesterday with the iocage guy, but he had some issues or didn't show up, and at a certain point I have to talk to, yeah, probably the iocage people. In the end, the key idea is that you just run the update scripts each time, you just send the snapshots, and that should be fine. But yeah, as I said, that's the biggest piece that is still missing.

When this is done... Currently, for instance, the way it works is: you set up a new build image, you run it, you tear it down, and then you step to the next one. It's not parallel, it's running on one machine, it doesn't really scale. But when you can download images, that opens the door to orchestration, whatever that ends up being. A few words about Nomad: it's a kind of Kubernetes-like thing, but the good part of Nomad is that it can support multiple container technologies, so it's not Docker-centered. Nomad runs on FreeBSD, so if you have some Java applications, for instance, you can already orchestrate stuff with Nomad on FreeBSD; it supports other drivers. So the relatively easy thing to do is to write a driver to orchestrate jails, basically. That would be the moonshot, the final goal. It's not impossible, not at all. We'll see; maybe there will be some other, better technology to do that, but that would be the final step to have something really scalable.

[Audience question] Yeah, I don't want to... it's just the setup and the tear-down. The point is, what I want to do at a certain point is also provide a kind of cold migration: if you have a jail running there, you can stop it, take a snapshot, move the dataset to another place, and run it again there. That kind of feature is not impossible, it's probably doable. Right now, what I don't want to do is regenerate jails all the time.
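As a rough illustration of the "send a snapshot, create a file, put it somewhere" idea mentioned above: since the images are ZFS datasets, a snapshot can be serialized to a file with zfs send and restored elsewhere with zfs receive. This is a minimal sketch only; the dataset name and file name are hypothetical, and the tool does not ship anything like this yet.

```rust
use std::fs::File;
use std::process::{Command, Stdio};

fn main() -> std::io::Result<()> {
    // Hypothetical image dataset and snapshot; not the tool's real layout.
    let snapshot = "zroot/ci/images/freebsd12-rust-stable@ready";

    // Serialize the snapshot into a file that could be published somewhere.
    let out = File::create("freebsd12-rust-stable.zsnap")?;
    Command::new("zfs")
        .args(&["send", snapshot])
        .stdout(Stdio::from(out))
        .status()?;

    // On the consumer side, the file would be restored with something like:
    //   zfs receive zroot/ci/images/freebsd12-rust-stable < freebsd12-rust-stable.zsnap
    Ok(())
}
```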
So the first thing is, okay, just download the image and start it; that's relatively easy. Moreover, if you use it just for builds, you don't have a lot of network problems: normally you just want to download things and upload things, you're not exposing any services, so the network stack is not an issue with jails. It's a really easy use case. But pot also provides for network services; you can run your jails on an internal network, with bridges and that kind of stuff.

It can be a good idea. I mean, normally I use SaltStack, which is, let's say, the brother of Ansible, so I feel it can be a good idea to orchestrate everything there. Yeah, nice. Thank you.

Yeah, thank you very much; my talk is over. Any questions, whatever? I guess my email was on the first slide, sorry. Thank you very much.