Right then. Thank you very much. Okay, so I'm Richard Brown from the openSUSE project. This is actually my second talk of the day on this topic, which is kind of fun. I was really looking forward to coming to the first session in this room, because it was meant to be about how our distribution is still relevant in the container world, but I was too busy in the container track telling them how distributions are still relevant in the container world. So this is the follow-on from that: what we're doing with openSUSE, what we're doing with containers, and why we're doing it.

Starting from the most basic level: as distributions, we are here to distribute software, and it doesn't necessarily matter how. People might call it an app, we might call it a package, it might be a service, it might be a container. When I started, software was distributed on one of these — a cassette tape. I'm getting a bit older, and I'm realizing that a heck of a lot of the assumptions I carry around with my open source work come from the fact that this is where I started: playing with this, learning how to code BASIC, then having an IBM PC and learning how all the little bits and pieces work. But that's not the world we're living in these days. Everybody has, and might have started on, all of these interconnected devices. If we're lucky, they might know about these things called servers. But there's an entire generation of developers who have simply been born into this cloud thing, where they can go to some web page and just start doing stuff. And that's not necessarily a bad thing; they don't need to know everything going on underneath.

Ultimately, though, everything is getting more interconnected, which also means everything is getting more and more complicated. So the stuff down in the deep plumbing actually matters even more than it ever did before; it's just that nobody wants to worry about it. And when you get these nice complicated interconnected messes, what's the first thing we as developers all do? We make the whole thing modular, which then causes even more connections. But this isn't a new thing. Today the word for it is containers, but it's something we were already doing back when we started packaging. And it isn't a trivial problem. Look at open source software: all these different packages and upstreams you're dealing with. The kernel is releasing something every three months. Kubernetes is releasing every three months. SaltStack is releasing every three to six months. And the folks behind Podman, Skopeo and Buildah are just releasing all the goddamn time — there's no release schedule, just something new whenever it's ready. As a Linux distribution, we're trying to take all of this and condense it down to something someone can actually use, because we're not expecting everybody to pull down Git repos, build them themselves and run them themselves. And it has to be coherent, consistent, and operational for the purposes it was built for.
And when you start looking at the container world and the cloud world, it's not just about that one machine in your room, or even a couple of servers: it needs to work at scale, for large systems, for large deployments, with thousands of users. And it needs to be totally and utterly stable — but it also needs to have the latest of everything, because that's what users are expecting. They've got used to having new software delivered quickly, with new features. It's not just us geeks at FOSDEM who like all this upstream stuff; the expectation now is that something isn't necessarily going to stay the same for 10 years. The entire service might not even exist anymore, if you're a Google Plus user.

The way people work with their computers is also different. My Commodore 64 was my pet. My first IBM PC was a pet. I cared for it; I installed my packages on it really carefully. And there's nothing wrong with that — that use case is still valid, and people are still thinking and working that way. But there's an entire generation of people who've been using nothing but iPhones and Androids and netbooks, who don't want to deal with this, don't know how to deal with this, and have never been exposed to it the way some of us have. And this is where the pets-versus-cattle analogy really comes from: if you have a misbehaving machine, throw it away and replace it with a new one. It's a common analogy when people talk about clouds, but when you actually extrapolate it and think about it more: which is more important for the world at large? Which one has a bigger impact on the most people? A pet helps a few people, a family. If a bunch of your cattle get mad cow disease, that has a huge impact on a huge number of people. So the OS that is running your cattle servers is more important than the old OS that was running your one little pet server.

And developers are users too. Some of us care about the deep plumbing — we're here in the distro devroom, after all. But at large, proportionally speaking, fewer and fewer developers care about the stuff we were caring about as we've been building all of this container stuff. They just want their web service to work. They want to deploy their microservice, move on, and have the OS do its thing out of the way so they don't have to worry about it. As always with a new level of abstraction, they shouldn't need to worry about the OS. But they want everything even faster. A lot of these themes I was here talking about years ago, in a slightly different context, because in openSUSE we have Tumbleweed, and Tumbleweed does try to address many of these issues by being the traditional Linux distribution iterating incredibly quickly, with good testing and good building. But it's only part of the story. It works really, really well for people thinking the way we think.
But it doesn't really solve the problems you see if you're doing everything with containers, on clouds, and working that way. It's just a complete mismatch. So I started looking at this problem and trying to figure out the route to a solution. And despite the fact that, deep in my heart as a distro guy, I still know they're wrong — they are not built correctly — containers do bring a huge amount of opportunity to actually solving this problem. And surprisingly — and I really hate saying this, because I know people are going to quote me for ages — not just OCI containers like you see with Docker, but, on the desktop and mobile side of Linux, things like AppImage and Flatpak and Snappy too. They may have their flaws, because the way they bundle things together and the way they isolate things aren't necessarily as well engineered as a traditional distribution package, but they give a huge amount of freedom: users can just install what they want, and developers can just deploy what they want, without having to engage quite so much with distributions. Users get their stuff fast; developers get their stuff out there fast. And despite being the big distro fanboy that I am, after a while you look at this and think: maybe this is actually an opportunity for distributions. Maybe we have a chance to lower the scope of the distribution to something more manageable, rather than trying to please everybody with everything. Come up with a solution that deals with what we need to deal with, and leave everything else to be everybody else's problem: leave it to the container framework or the application framework to deal with the user space stuff, while we deal with the plumbing, which is our core competence, our main strength.

So: building the community distro for this new age. Inside openSUSE we started looking at this problem, pretty much along the route I've just taken you, in the Kubic project. It started in 2017 as a sub-project of openSUSE. Because we have Tumbleweed, and because we know how to do all of this stuff, we've based all of our efforts on the Tumbleweed codebase and effectively built a new distribution derived from it, focused on this problem. We're using kubeadm for Kubernetes. We're using the Podman/CRI-O family of container tools. We have transactional, atomic operating system updates and a really heavily customized installation routine, because we're a bunch of geeks and we still want to do 500 things differently rather than deploying the same thing all the time. But it's a community project: that's the list of what we're looking at right now, and we will look at anything else that anybody wants. In fact, there are examples of things that changed lately that I honestly had no idea were happening until suddenly I had a release announcement for a really cool new feature. That's how it is — it's an openSUSE project, and that's how we do everything. The base layer of Kubic we call MicroOS, and it's aiming to be the perfect container host. It has a read-only root filesystem using Btrfs, for reasons I'll explain later.
Like I said, it uses Podman and CRI-O as its container runtimes, and it's based on Tumbleweed. The general use case for MicroOS on its own, without something like Kubernetes on top, is as a single-machine container host: your typical developer's machine for running containers, testing containers, building containers, that kind of thing. It's completely automated for updates: it patches itself, reboots itself, takes care of itself, and rolls back if something goes wrong. We're currently using cloud-init, probably moving to Ignition soon, for bootstrapping the machine initially — adding things like SSH keys, so there's even less effort. The general idea, conceptually, is to have all services provided by containers. Of course, that's a lie: you need to have a bunch of services there so the containers can do stuff. But in terms of the story, that's the story. We've also started looking at other architectures. This is one of those features I had no idea was coming until I saw the blog post, but we now have a fully working AArch64 port, with all of this stuff rolling forward on AArch64.

Is it true that sysadmins never want to touch a running system? Generally speaking, yes — and yet almost every distribution forces them to touch their running system. We apply updates; binaries change, libraries change, config files change, and the machine starts changing its behaviour immediately after the point of that patching. This ends up being a huge problem, a dangerous problem. Services are running, users are doing things — and users are half the problem most of the time: what they've done ends up breaking what we've upgraded and patched. Software changes things, sometimes on purpose, and as packagers we don't always get it right. That's really bad when you've just done an update and some RPM post-script has accidentally deleted the database, or something like that. Rolling releases make it even more complicated — this is what we've learned with Tumbleweed. The change from SysVinit to systemd was a huge change. If you're just pushing updates out in a rolling fashion, SysVinit suddenly not being there because systemd replaced it is going to have really weird side effects on your machine while it's running. Major version updates of whole stacks do the same. And what can users do if their system suddenly breaks? If they're literally in the middle of doing work and something suddenly isn't there anymore, most things stop working.

It's even worse when you look at the enterprise side of things: mission-critical systems, large cloud deployments, high availability, some kind of orchestration moving things around. Nobody wants their service interrupted, but in reality they normally have enough redundancy that they don't care if a single system is interrupted. The server itself can turn off, because there are three other servers doing that job as well. But still, at the moment, we just push everything out and break all of them at the same time. You need to make sure that everything is upgraded in one consistent change. If you have a bunch of new packages, are they all applied the same way? RPM post-scripts are enemy number one for that idea. They can very easily leave a system in an undefined state. It might be working; it might not. It might have done the same thing on every system; it might not have.
How do you deal with that in a safe and sane way? The solution we were looking at is the transactional update — transactional as in a database transaction. We wanted a system update that is atomic: either it fully gets applied, or none of it gets applied at all. It must not touch the running system in any way, manner or form. And once the update has been applied, we also needed to be able to roll back that entire change in its entirety, just in case something went wrong. At SUSE and openSUSE, we've been tackling one level of this problem for 15 years with all the work we do with Btrfs and Snapper. On any SUSE distribution, Btrfs is installed by default, and our package manager is tied into a snapshotting tool called Snapper. Whenever you patch the system — which of course is touching the running system — we take a snapshot before and after. You have the snapshot from before, exactly the state before the changes were applied, and the snapshot from after. That's great for being able to roll back, so that part of the problem is solved. But it isn't atomic: those RPM post-scripts do change the running system, and the system is in flux during the update. It solves only half of the problem.

With transactional-update, we basically realized that we had over-engineered the solution in some respects. With Btrfs, zypper and snapshots, because the system was running and we were patching the running system, we were in some respects doing twice as much work as we needed to. What we do instead is this: the running system is actually a read-only root filesystem — no package manager can make any change to it, even if it wanted to. Then, with Btrfs, we make a new snapshot as a sub-volume, overlaying the root filesystem, and that snapshot is read-write. We then effectively redirect the output of the package manager into that snapshot. The snapshot gets patched, not the running system. The running system keeps running, every binary untouched, everything clean, clear and pristine. All of the changes — no matter what they are, RPM post-scripts and what have you — get redirected into that snapshot, which, when the update is finished, we close and set to be the next boot target. When your system then reboots, you move in one single jump to the new state of the system. It's effectively a hybrid of the embedded-device model, where you have images going out and you deploy the image on the next boot. The really nice thing about this is that it's far more space efficient, because Btrfs snapshots only cover the diff of what's changed. We don't have to carry a whole second image of the OS and flip partitions or anything like that, and we can do deduplication over the whole thing. There's also potential for over-the-air updates — sending the snapshot to a different machine so you can make sure everything is getting exactly the same update. And from an RPM point of view, a packaging point of view, it means we haven't had to reinvent the packaging wheel. We don't have to develop and mature some ornate new format. We can use the existing packages, in the existing way, with minimal modifications — hopefully none at all. We don't have to learn new tools, and we don't have to learn new processes.
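As a rough illustration of the mechanism just described — this is a heavily simplified sketch, not the actual transactional-update implementation; the snapshot number and paths are made up, and the real tool drives all of this via Snapper:

    # 1. Take a read-write Btrfs snapshot of the read-only root filesystem.
    btrfs subvolume snapshot / /.snapshots/42/snapshot

    # 2. Run the package manager against the snapshot, not the running system.
    zypper --root /.snapshots/42/snapshot install htop

    # 3. Seal the snapshot and make it the next boot target.
    btrfs property set -ts /.snapshots/42/snapshot ro true
    btrfs subvolume set-default /.snapshots/42/snapshot

    # 4. Reboot whenever convenient; if the new snapshot misbehaves,
    #    simply boot the previous one again.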
We can basically take all of the skills and all the benefits we've accumulated from doing this stuff for decades and apply them to this new world, where people just want a system that moves immutably from one state to another. Doing it this way also means you have a normal boot time — there's no handling of things like OSTree processing, where you're figuring out what you're booting into. And you get an incredibly quick rollback: when you boot up, if that snapshot does not work the way it's meant to, you just throw the snapshot away, boot again, and you're back exactly where you were. It's a very nice, clean way of doing things.

Does anybody want to see a demo of this working? Or I can move on. Okay, fine, let's see if this works. This is fun, doing it backwards over my head. There we go. Nope. Thank you, LibreOffice. Oh, come on. I love this — demo effect in full force. Sorry. There we go. Does that look okay? Yeah, kind of. All right, so this is a standard Kubic machine installed with MicroOS. We've got things like Podman installed on there for running containers. We've got vi on there as well, which is more than I was expecting. But we don't have htop, and we want to install it, for whatever reason, to monitor the system. If I try to do an old-fashioned zypper install of htop, it's not going to work: it's a transactional server. There's a nice big red message telling you so at the bottom — which unfortunately you can't really see, because this is a VM with a really dumb shell, and I haven't installed anything that can increase the font size, because it's a very minimal OS. So yes: htop's not found, and I can't install it, because it's a transactional server. Instead, I run our transactional update — and hope the Wi-Fi is working well enough. So this is downloading, creating the snapshot, preparing the update, and now I'm getting the output from zypper, so it's running our usual package manager: do I want to install htop? Indeed I do. It's installed htop — and now I type htop, and htop isn't there, because it hasn't touched the running system. The running system is in exactly the same state it was before I ran any of these commands, which is kind of the point. The only way of getting htop onto this machine now is actually rebooting. Normally we have a service called rebootmgr, which has a schedule and can also be set up to do things like checking for maintenance windows, so your nodes only reboot when you want them to. In this case, though, I'm just going to reboot.

"Rebooting after every action is horrible." Well, not every action — your container applications aren't affected by this, so it's only when you're changing the OS underneath. And, for the recording, the comment was: congratulations, you've recreated the Windows experience. My point is: not really, because this is scoped to just the OS, with your applications coming from some other layer, like containers. On that level it's different. And secondly, Windows isn't atomic: the update runs, screws your current system up, and then you reboot. At least this way, your current system is fine, and you reboot whenever it suits you. So it's a better model. And there we go — now we have htop. So that's transactional updates in a nutshell.
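For reference, the demo boils down to these commands, as run on a MicroOS/Kubic machine (htop is just the example package used in the talk):

    # On a transactional system, a direct install fails by design:
    zypper install htop                      # refused: transactional, read-only root

    # Install into a new snapshot instead; the running system is untouched:
    transactional-update pkg install htop

    # Activate the change by rebooting into the new snapshot
    # (normally scheduled via the rebootmgr service):
    reboot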
Let's see if the poor video guys can deal with that. And as you saw, the boot time was — ah, sorry. There we go. If you're interested in using it, the commands you need are all there, and they're very close to what we're used to in a typical zypper environment (see the summary below): just as zypper up updates the entire system, transactional-update up updates the entire system. There are also really nifty debug-style features like transactional-update shell, which creates the snapshot and then just dumps you into a shell inside it, so you can do whatever the heck you want; when you exit, that's your single atomic update. It's a nice flexible way of doing it. And rolling back is just transactional-update rollback.

This isn't an exclusively Kubic thing. It's also used in SUSE CaaS Platform for updating their enterprise Kubernetes distribution. We use it in Kubic, of course, and it's also available in the traditional openSUSE distributions: both Tumbleweed and Leap offer a transactional server option, so you can get everything in the standard openSUSE distributions, but with this mechanism for updating. It's also coming soon in SLE 15 Service Pack 1, as a tech preview module. There are some known issues, because some packages do things that are just a little bit unfriendly — like phpMyAdmin writing to /srv, which is read-write and therefore isn't redirected into the snapshot. It'll work; it just might change something in real time that it shouldn't. So those will carry the caveat: avoid those couple of packages.

Cool features and cool ways of delivering software are only part of the story. In openSUSE we try to talk a lot more about the bit that's really important: how we build it. It doesn't necessarily matter how cool it is today; it matters how well it's going to work 10 days or 10 years from now. With rolling releases we've learned a rule which I'd summarize like this: if you're trying to move a complicated software stack, the traditional option is to build your thing, freeze your thing, and then spend ages backporting stuff on top of it. That works, but it's a lot of work, and that work gets bigger and bigger over time. When you're doing a rolling release, your goal should be to be able to effectively throw away your entire software distribution at will if you need to. If that one library requires you to change 100 libraries, which requires you to change 400 other things, you need a process that can actually scale to that kind of change, so you can move the entire universe to get that new thing in there — and still deliver it in a way that's built properly, tested properly, and works properly for your users.

With openSUSE, we've got a few tricks up our sleeve. We've had our build service now for well over a decade. It's what we use to build all of our stuff, and it can build packages for anybody else too. It's used by more and more people — not just the Linux Foundation and VLC, but now also within the container world. Has anybody here used Kata Containers, for anything, on any distribution? Shame — I got way more hands in the other room when I asked that question. All of the Kata Containers packages, for every distribution, are built on our build service as part of the Kubic project.
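To summarize the transactional-update commands mentioned above (a quick cheat-sheet; see the transactional-update man page for the authoritative list):

    transactional-update up         # update the entire system into a new snapshot
    transactional-update shell      # open a shell in a new snapshot; exiting
                                    # closes it as one atomic update
    transactional-update rollback   # return to the previous snapshot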
And now, of course, we're also using it for building containers, because many of the problems OBS solves apply there too: making sure that if this dependency over here changes, the entire dependency chain gets rebuilt, so you have a consistent offering. The container build solutions don't do that, so you end up with all these containers lying around with stale packages inside them. With the build service, what we already do for RPMs we now also do for containers. You can build a container with whatever packages you want in it, and when OBS notices those packages have changed, it actually rebuilds the container for you, so you have nice fresh containers all of the time. And registry.opensuse.org has literally every container we build, so you can start using it to play around with this kind of thing.

Building is cool, but it doesn't matter unless it works, so we're using openQA, which we started. It's a bunch of Perl scripts, really — but it's a bunch of Perl scripts we taught to act like a human, and I'm not sure which is worse. It's the only solution out there that can really test a distribution, or test anything, the same way a user is going to use it. It can see the screen, it can see the UI, it's aware of which areas of the UI it's interested in, it can move the mouse, it can click everything. With openQA, we test hundreds of different scenarios, digging down into them to make sure the user experience of using the distribution, and the tools on that distribution, is behaving the way it's meant to. The slightest deviation — for example, someone changing the background in GRUB — gets caught and stops the test, so you can check whether the change was intentional or not.

With these tools tied together, we've basically built what would now trendily be called a CI pipeline for building distributions. This rough workflow is what we use for Kubic, for Tumbleweed, for Leap, and, in some form, for SLE. Any submission gets sent in and automatically checked by a whole bunch of scripts and linters in OBS. We then do one tier of openQA testing, basically asking: is this submission putting the entire codebase at risk? Is it going to destroy everything and block us from testing anything in the future? At that point humans get involved, with a manual review and the usual kinds of checks: is this change sane, is it solving the issues we wanted to solve? If it gets accepted, it's put into what we call Factory, which is essentially the prototype for the next release of the distribution, where we build all of that stuff consistently and test all of it as one distribution. In the case of Tumbleweed and Kubic, we do them in absolute lock-step, in parallel: we take the ISOs and the images and the FTP trees that Factory produces and test them in parallel in openQA, because they're ultimately all based on the same codebase. Assuming all of the openQA tests pass — meaning everything from KDE and GNOME to Kubernetes and Podman is sufficiently green — they get shipped automatically to users. It's DevOps for distributions. And then, after I've talked about all this testing and all this building, people say: well yeah, but I run Arch. I just want everything now.
I don't want to wait for all this building and testing. Well, looking at upstream projects, I'd say we've got a bit of evidence that we can keep up. Kubernetes released version 1.13 on December the 3rd; we shipped it just over a week later, with CRI-O shipping three days after that, and with Podman I made a mistake and shipped it before they announced it. So the process really can keep up. We can build this stuff, test it, ship it; it moves at the pace of contribution, just like everything else in openSUSE.

Now, I've managed to get through this entire presentation while only mentioning the D word once: Docker. I'm not a huge fan of Docker, and we're not huge fans of Docker inside the Kubic project, for a whole host of reasons I haven't got time to go into. The simple, short ones: architecturally speaking, looking at this container stuff from a distribution person's perspective, it's a massive monolithic daemon, and if it goes wrong, you're completely screwed when it comes to all of the containers running on top of it. You can't manage your containers. If it gets breached, all of your containers are exposed. It's just a huge, nasty crutch for the problems they were trying to solve. Luckily, Docker isn't the only answer to those problems. In Kubernetes land there is an alternative runtime called CRI-O, running the same kind of containers — still OCI containers doing the normal container stuff — but built for Kubernetes, focused on Kubernetes, and ridiculously more lightweight in comparison. There's no huge daemon running on every machine; Kubernetes just spawns the CRI-O process, and the container is a child of that process. Good old-fashioned Unix philosophy: keep it simple.

That also makes it easier to tie in the other tooling and techniques we've been using to secure our systems for years, things like SELinux and AppArmor. With Docker, if you want to wrap AppArmor around it, it's a complete nightmare: you basically end up poking holes for every single container you have, just so it can reach the resources it needs on the base operating system. With CRI-O and CRI-O-like runtimes, each container is just a single process, and each container can have its own AppArmor profile giving it access to just the bits of the system it needs (a sketch of the idea follows). Think of a typical container: some binary running some service and — despite the dream of everything being stateless, which is never true — some data somewhere that needs access. With this model, it's quite easy to have an AppArmor profile that gives the container access to just the bits of the OS it needs to see, plus that one storage location, and nothing else. Life is nice and safe.

But like I say, CRI-O is very much Kubernetes-centric. If you're running a Kubernetes cluster, CRI-O is underneath it, but it's abstracted away from you really interacting with it; you're not going to see anything interesting. For those of us just messing around with containers on our workstations, or on a standalone server, replacing Docker, we're using Podman. It's basically a drop-in replacement for the Docker command line, using the same containers, just like CRI-O.
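As a hedged sketch of that per-container confinement idea (the profile name, image and paths here are purely illustrative, not something shipped by Kubic):

    # Load a custom AppArmor profile (assumes you've written one at this path):
    apparmor_parser -r /etc/apparmor.d/containers/my-service

    # Run the container confined by that profile, exposing only one data dir:
    podman run --security-opt apparmor=my-service \
        -v /srv/my-service-data:/data \
        registry.example.com/my-service:latest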
It shares a lot of the same concepts and libraries with CRI-O, and in fact at some point they're going to be aligned more and probably merge together in at least some form. The syntax is practically the same: podman run is the same as docker run, podman pull is the same as docker pull. In fact, on my systems I just alias it, and I haven't run Docker for a year. There is one big difference: Podman has no equivalent to docker-compose, because, being a child of the ideas behind CRI-O, you've got Kubernetes instead. In Kubernetes land, when you have a complex containerized service, instead of a docker-compose file defining all the different containers in some YAML, you have a Kubernetes "kube" YAML defining all of the containers for your pods. Basically the same concept, but more people are using Kubernetes these days. And that's where Podman gets its name: it's the manager for pods. It has additional functionality for creating and running pods by hand if you want to, so you can literally start a couple of containers and create a pod without having to write any YAML, which is kind of nice. You can then have Podman generate that YAML for you — which is really nice, because I don't like writing that much — or you can take existing YAML templates from existing Kubernetes clusters and run them on your Podman machine instead. So docker-compose is missing, but to be honest it's not really needed.

There are also some extra nice features that people working with Docker really wish they could have, like deleting all of your containers or deleting all of your images. That's another reason these tools have my heart: pull requests for that have been lurking in Docker's GitHub history for years; in Podman it's there, and it works. The downside of all this really cool upstream open source stuff is that when I moan at the Podman folks, they tell me to just fix it, because they'll accept my pull requests — whereas in the past I could just make everything Docker's problem. It's a nice problem to have. And being more lightweight — a simple process running on a machine, starting a container — means you can also do very interesting orchestration-like things, such as tying it up with systemd: having systemd start and stop containers, or using socket activation so the container only starts when a user tries to access it. Nice features whose potential I don't think has been fully realized yet, but we'll see where that goes.

Building containers: podman build basically emulates exactly the way docker build works, but there are more ways of building containers. Buildah does all of that, in a million different ways: building from scratch, building from existing images, using a Dockerfile; it can produce the standards-compliant OCI format or the Docker format. And you can do really cool stuff like taking an existing container, mounting it, making your changes in a shell, unmounting it, and then creating a new image from that changed instance — which is just a far nicer way of doing things than trying to manually inject changes with other tools, the way a lot of other places do. Then, once your container is built, you've got to put it somewhere — you need a registry, and you need to manage it — and we have Skopeo for uploading, controlling, and deleting the content of a container registry.
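A hedged sketch of that Buildah mount-and-commit workflow (the base image and the change made here are illustrative; run as root, or via buildah unshare):

    # Start a working container from an existing base image:
    ctr=$(buildah from registry.opensuse.org/opensuse/tumbleweed)

    # Mount its filesystem and make changes directly in a shell:
    mnt=$(buildah mount "$ctr")
    echo "hello from buildah" > "$mnt/etc/motd"
    buildah umount "$ctr"

    # Commit the changed instance as a new image:
    buildah commit "$ctr" my-modified-image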
Outside of all that standalone container stuff, we have kubeadm, the upstream Kubernetes cluster bootstrapper. The issue is that you need at least three or four nodes working in conjunction to run your containers — so how do you get those three or four nodes to talk to each other? Lots and lots of people have tried home-brewing their own solutions, and it got so messy that Kubernetes upstream decided: okay, we're just going to build one tool that at least does the basics, so everybody does the basics right. We've really embraced that, and we want to use it as much as possible. When we bump into issues, we work with upstream to extend it, and if people come up with additional third-party pieces that don't work upstream, we'll look at using them. kubeadm went GA in Kubernetes 1.13, just before Christmas, and from a Kubic point of view it's incredibly nice because it's completely decoupled from the operating system: it creates your Kubernetes cluster in containers, so those containers can have their own lifecycle and be replaced when you want to replace your containers, and the operating system can happily patch and reboot whenever it wants. It fits perfectly with our way of thinking about what an operating system should do and what the containers should be doing on top.

Setting up a kubeadm cluster is nice and easy; we have all the instructions on our wiki, and a sketch of the commands follows. Basically, you take the Kubic ISO, start it up, and say you're going to run kubeadm. It doesn't matter at this point whether the machine is going to be a master or not, because it always installs exactly the same binaries; the difference is handled by the containers. On the machine you want to be the master node, you run one simple command to initialize it. It downloads the control plane, sets up Kubernetes properly, and configures everything the right way. At the moment you have to explicitly declare that you're using CRI-O; that part of the command line will disappear in 1.14 — because we need it to disappear in 1.14 — so the runtime can be auto-detected. At the end of that, you get a nice text output saying your cluster is basically built and done, and you get the string to run on the other machines so they can join your cluster; it includes the discovery token. You then need some way to manage the cluster you've just initialized, so there are a couple of commands to run to extract the credentials so your machine can be trusted to administer the cluster — basically setting up the admin console. Normally you do that on the machine you're currently working on, but if you want to manage it remotely, you just take those files and deploy them wherever the hell you like. You also need a network. In Kubernetes land there are tons and tons of different network layers, all using CNI. In the case of Kubic, we're mostly testing Flannel, from CoreOS, at the moment. Once your cluster has got to that point, you deploy your Flannel containers, or your CNI containers of choice, and that sets up the network so the other nodes can actually join something — they need some network to work together. Once that's configured, you go to the other nodes, run that join string, and you have a working Kubernetes cluster, and you can start actually running your containers on a highly available fabric, with things moving along between the different machines. It looks something like that.
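A hedged sketch of that flow (the address, token placeholders and the Flannel manifest URL are illustrative; the Kubic wiki has the authoritative steps):

    # On the master node: initialize the control plane, pointing at CRI-O
    # (this explicit socket should become unnecessary with Kubernetes 1.14):
    kubeadm init --cri-socket=/var/run/crio/crio.sock

    # Set up the admin console (kubectl credentials) on this machine:
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

    # Deploy a CNI network layer, e.g. Flannel:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    # On each other node: run the join string printed by 'kubeadm init':
    kubeadm join 192.0.2.10:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>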
I was going to try to do a demo, but I don't quite have enough RAM on this laptop to run three nodes at the same time. At the moment we're working very closely with upstream Kubernetes: using their stuff, filing most of our issues upstream first, and not really carrying any patches for any of this in Kubic. And it's working out quite nicely, because as of two weeks ago, Kubic actually became the first open source community distribution certified by the CNCF. It's certified Kubernetes, following all of their standards and passing all of their test suites, currently on 1.13, the latest upstream release. With 1.14, I'm hoping to use that whole build-and-test process of ours so that we'll actually be the first distribution — even faster than all the enterprise ones — to have 1.14, tracking upstream as closely as we can. And when it comes to what's next, beyond everything we're doing right now: whatever the community wants. This is working really nicely now, and the container world is moving incredibly quickly, so if anybody has any bright ideas, now would be a good time — for ideas or for questions. Thank you.

Yes sir? "Will there also be a container version that is more long-term support?" So the question was: is this a container version of Tumbleweed — the answer is yes — and, given that, will there be a longer-supported version of Kubic? I'm not going to say no, because when I started working on Kubic, literally the first thing I did was change a lot of the structure to make room for that possibility. Inside the build service we call it Tumbleweed Kubic, so there could be a Leap Kubic if somebody wanted one. But the more I look at this stuff, and the more I package this stuff, a rolling release solves so many problems. I dread to think how much work someone would have to do to get Kubernetes 1.13 or 1.14 running on Leap. Podman is actually quite portable, so I can see some of that stuff translating across, like we did with transactional updates. But generally speaking, for a container-oriented OS, I think the challenge is to keep up with the upstreams, keep up with where everyone's going, but do it in a way that's still stable. I don't see an LTS actually solving any problems.

"Are you planning on putting that into zypper, so that we can just run the zypper command and it will know the right thing to do?" So the question was: are we planning on putting the transactional update functionality into zypper? Ignaz, are we still planning on doing that? Yeah — he's working on it. Release date? Tomorrow. We wish — but soon, hopefully. At least now — in the past we didn't used to have that error message coming up, so zypper would just fail horrifically; at least zypper now knows about transactional updates and doesn't let you do stupid things. Okay, so the question was: when you're using transactional updates, is zypper running on the base OS, or in some container? With this, we're basically a cut-down traditional-style OS, so it's the typical zypper, with typical packages, running on a typical disk, and we make the container problem something higher up the stack. We do the OS part; we didn't see the benefit of trying to containerize everything. There has to be some line where, in our minds at least, as a distribution we're responsible for providing an OS.
It has to be built properly and done properly, and containers aren't the right way of doing an OS. Okay. So, to repeat the question: when running something in a transactional update, what if a post-script were trying to get hold of something like systemd? We wouldn't allow that in openSUSE, so I don't think it's ever come up; we try to build packages in a way where that would never be necessary. That would be fun, though — it would fail, and to be honest, I'd be glad it failed; I'd rather not have that package installed in that way. Yes? How do we deal with incompatible configuration files? We do our best to do a three-way merge, and we doubly snapshot all the /etc stuff, so you can fix that. Yeah — fillup templates, exactly. There's a mechanism, fillup templates, which will transform configuration files from old versions to newer versions even if the configuration file itself was modified. That's what I meant. Yes? Could you snapshot your data as well? So the question was: what if you want to snapshot your data too? That's something you can already do on every openSUSE distribution: Snapper works for anything outside the root filesystem (a sketch follows at the end). It's out of scope for the OS, in the same sense that we don't want to mess with user data — we don't want to presume what snapshotting policy you want for your data — but the tool is there, so you could say: okay, snapshot this folder this often; and you could potentially hook it into our tooling too. The argument there was that distributions could know when they're changing data. In openSUSE we have a philosophy of: don't mess with user data. MySQL is a perfect example: when we're updating MySQL, we expect the user to run the appropriate update scripts; we don't run them for them, because that doesn't seem like our place — it seems presumptuous. I'd rather the next boot have MySQL not working, and the user fix it there. Different philosophies. I think I'm done for time. Thank you very much.
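For reference, a hedged sketch of the Snapper workflow mentioned in that last answer (the config name and path are illustrative, and /srv/data is assumed to be its own Btrfs subvolume):

    # Create a snapper config for a data volume outside the root filesystem:
    snapper -c data create-config /srv/data

    # Take a manual snapshot; timeline snapshots can be scheduled in the config:
    snapper -c data create --description "before migration"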