Can we start? I guess we'll start. I'll give another 30 seconds for people to come in. I'm going to actually sit down, because I'm going to be doing a lot of typing, so I hope you don't mind. If you can't see me, just motion and I'll stand up. I'm going to play piano. It's a YouTube person, an internet person, who played this song. If you want a copy and you can't find it, just ping me on Twitter. I'm James, purpleidea all over the internet. Come on in. Don't be shy. Maybe we should... do we have the FOSDEM compression algorithm? If there's a seat in the middle, can you stand up and move in so there's a seat on the end? There's a seat next to you. Can you raise your hand? There's a seat there. Someone wants to sit. Come in, come in, come in. So we're going to have to actually start. I'm making you all listen to piano music. So there we go. Someone just came in; that wasn't me playing.

All right, so I'm going to go pretty quickly. If you're lost or something, ask me a question. I don't have too much time, because I only have 40 minutes and a lot to go through, but I'll try my best to answer short questions, and at the end I'll have time for more. All of this material, all of the code, everything that I'm going to show you, is online. It's all free software. There's actually a blog post on my technical blog. Oh, that was bright. Someone just flashed me with the camera. Are you guys still there? I cannot see. All right, you're still there. Don't be shy. So there's a blog post about this whole thing, so you can check it out on your own. I just released this project about two weeks ago. There's one blog post already, and on GitHub there are 260 stars. I don't know if this is a metric for anything, but people seem to be interested. And so I want to share with you my ideas for the next generation of config management.

Just quickly, who am I? I'm a hacker. I work on config management things at Red Hat, so thanks to Red Hat for paying me. So please, who's read my blog before? If you haven't, just raise your hand anyways, so I seem really popular. Okay, great. I'm actually a physiologist by training, so if you'd like to talk about cardiology or something else that I don't really use anymore, please, I'm really happy to. Physiology is getting back at me by making me sick during my tech talk, so I learned too much about something anyways. And I'm actually a DevOps believer, so if you're not into DevOps, we can talk about it. Someone's like, yeah.

Just a really quick history lesson. Some of you might remember some of my Puppet hacks. Does this look familiar? Beaker, setting fire to basically everything, back when Puppet was 0.24, approximately. I think that's when I first started with Puppet. I think I got fairly good at it, and I did some really outrageous things in Puppet, because I wanted to do some really powerful techniques and things in the software. I gave a talk, first at LISA 2013, about some of this stuff, and it was very well received. People were like, who is this guy, and why is he writing this stuff? Who's done some Puppet or some config management? One more. If you're really shy and you don't like raising your hands, just let me know. Anyone? OK, no. Everyone's good. So you can actually do recursion in Puppet, which is totally obscure and basically not very helpful, but I did this; if you want, there are links at the bottom.
I wrote this thing where, basically, if Puppet needs to run again, it detects that and then double-forks a process that waits for Puppet to end and then reruns Puppet. So it's a simple hack, but it actually really works, because if you want something to converge, you have to do it in less than the 30-minute run interval. So that's the thing. I wrote timers. Why you want timers is a totally different topic, but I did this in Puppet. I actually built finite state machines, of all things, in Puppet. So if you want to describe certain types of state transitions (this is just a model here, for example), you could do this with some nasty Puppet code that I wrote. But is this really the right way to do it? No. No. So this guy basically sees my Puppet hacks, and what does he do? He's just like: nope. Not having any of that. And there's the nope cannon, which powers up and, you know, done. So: nope. We don't want to do this at all.

So eventually, after a lot of helpful discussion with a lot of people, people were like, just write the thing that you want. So I wrote this tool, and I'm going to tell you about the three basic design primitives that are available so far. There's a lot more coming, but in short: the first thing my tool does is parallel execution. If you imagine a graph in Puppet or any other language, the engine basically does something called a topological sort and runs the whole thing in serial. That's totally crazy, so I do this in parallel. I'll show you in a second. The second thing is that it's event-driven; I'll talk more about this. And thirdly, the whole system works as a distributed system. I'm going to cover these three with some examples.

So this is basically what a graph looks like in Puppet, or any config management system, really. If you look at this graph, a normal engine will go through and follow this arrow. Can you see the arrow? It'll basically just say: OK, I'll do this, this, this, four, five, six, and then seven. Which is actually perfectly reasonable; it meets all the dependencies. But these are graphs: we can take disconnected parts of a graph and run the part on the left at the same time as the part on the right, right? Totally makes sense. I don't know why no one did this before. But in addition to that, once we've run this part here, 1a, we can run 2a and 2b in parallel, at the same time, right? Because there's no longer a dependency blocking either of them. And the system figures it out automatically. I knew there was a word there somewhere. And then lastly, 3a will run.

So I'm going to give you a quick example. This is the actual graph generated from the tool. There are basically three exec types, and each of them is going to be a 10-second sleep. So the exec is going to run sleep for 10 seconds, and that's it. And this one here just has no dependencies. So if this were to run, how long should this take? Someone call it out. 30 seconds. 30 seconds, right? Because this one's not going to depend on anything; it's going in parallel. 10, 20, 30. So let's just go to this example. I actually need etcd running; this is just an implementation detail at the moment. And here's graph8. So what I'm going to do, just to explain: I'm going to run this tool. Let me just make sure I have nothing nasty in here. Oops. Turn down the blooping. That's annoying. And so what I'm going to do is run this tool.
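(Editorial aside: to make the parallel-scheduling idea concrete, here is a minimal sketch in Go, the language the tool is written in, of running a dependency graph where every vertex waits only on its direct dependencies, so independent parts of the graph run concurrently. The vertex names roughly mirror the demo, with 1-second sleeps standing in for 10-second ones; this is an illustration, not mgmt's actual engine.)

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	// vertex -> the vertices it depends on (hypothetical demo-like graph)
	deps := map[string][]string{
		"exec1": {},
		"exec2": {"exec1"},
		"exec3": {"exec2"},
		"exec4": {}, // disconnected: runs alongside the 1->2->3 chain
	}

	done := make(map[string]chan struct{})
	for v := range deps {
		done[v] = make(chan struct{})
	}

	var wg sync.WaitGroup
	start := time.Now()
	for v, ds := range deps {
		wg.Add(1)
		go func(v string, ds []string) {
			defer wg.Done()
			for _, d := range ds {
				<-done[d] // block until each dependency has finished
			}
			time.Sleep(1 * time.Second) // stand-in for "sleep 10"
			fmt.Printf("%s finished at %v\n", v, time.Since(start))
			close(done[v]) // unblock everything that depends on us
		}(v, ds)
	}
	wg.Wait()
	fmt.Printf("total: %v\n", time.Since(start))
}
```

With these stand-in sleeps, the 1, 2, 3 chain dominates and the total lands around 3 seconds, the same shape as the 30-second run in the demo.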
And because I want you to see how fast it actually runs: when the graph has been converged for five seconds (that's this 5 right here), I'm going to ask it to quit. So: run until you're converged, wait for five seconds in case something happens, and then exit. We're going to see how long it takes to run. So I run it, and immediately you see, right here, execs 1 and 4 both start up and start running, at about 23 seconds. I'm going to press enter here so you can see it go by. And then about 10 seconds later, you'll see right here, 1 and 4 both finish at 34 seconds, and number 2 right here starts up and runs its sleep. Now, sleep doesn't return any output, so it's empty. Ten seconds later, at 44 seconds, exec 2 finishes and exec 3 starts, and it runs for 10 seconds, basically doing nothing in this case. Exec 3 finishes, 1, 2, 3, 4, 5, and there it is, the tool finished. And basically the whole thing ran in 35.018 seconds. So this tool really has zero overhead, or as little overhead as possible, and that's one of the things we'd love to have. Any questions so far? Am I going too fast? Are you afraid? All right, good. So we're going to continue. Yes. All right, I'm going to get to that. Ask me at the end if you still want more information. Yes and no. Let's come back to that. It's a good question, but we're not quite there yet.

So this is the second part. In config management, typically what we're used to seeing is something that runs every 30 minutes, or every five minutes, or however long it is. You run at time 1, and then you run again at time 2 or 3 or whatever. But if something goes out of state in between your runs, you don't notice it until the next run. So if you want something to actually respond immediately, this can be quite useful. Here's what we do: for file types, we use inotify watches (or fanotify, in the future) to detect when something changes, and we immediately respond with a check. So the very first time we run and we start up, we check each resource, and we set these watches. And then we never, ever, ever have to recheck that resource until the kernel basically tells us that something might have changed. And if something has changed, then we do the work to recheck the resource. So, let's see. The drive just opened here. That's weird. Ghosts. So for files, we use inotify. For services, we use basically the systemd D-Bus API. For exec, we're using the kernel, and so on.

So I'm going to give you a quick example of this. Oh, this guy is still running. Oops. Okay, so my tool doesn't actually make directories yet, so we have nothing in this directory. And over here, I'm going to show you, we're going to run a graph. All right, so this is a very simple graph. It has three file types in this mgmt folder: f1, f2, and f3, and each one basically just has content saying: I'm f1, I'm f2, I'm f3. In addition, we have this f4 file, which we've asked to be not present. So if this file exists, we want it to be gone. And so we're going to run this tool, the same thing again, but this time I'm going to leave it running continuously. I'm going to run it on the left, and I want you to see how fast it finishes. So I press enter, and it's already done. You can see right here, files 1, 2, 3 have been made, and on the left, you can see that they're actually there.
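(Editorial aside: here is a rough illustration of that watch-then-sleep pattern, as a minimal Go sketch using the fsnotify library, which wraps inotify on Linux. The path and the check function are placeholders, not mgmt's file type.)

```go
package main

import (
	"fmt"
	"log"

	"github.com/fsnotify/fsnotify"
)

// Check a resource once, set a kernel watch, then do nothing until the
// kernel says something may have changed. No polling in between.
func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	const path = "/tmp/mgmt/f1" // example path
	checkResource(path)         // initial convergence check

	if err := watcher.Add(path); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-watcher.Events:
			// Only now do we re-check the resource.
			fmt.Println("kernel event:", ev.Op, "on", ev.Name)
			checkResource(path)
		case err := <-watcher.Errors:
			log.Println("watch error:", err)
		}
	}
}

func checkResource(path string) {
	// Here a real tool would compare actual vs. desired state and repair.
	fmt.Println("checking", path)
}
```

One real-world wrinkle: an inotify watch on the file itself is lost when the file is deleted, so a production version watches the parent directory instead; the next sketch does exactly that.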
And I can cat them and see that all the files have the right contents. And you can just remove f2, and, oh, you see, it's already back. So if you look on the left, when you remove a file, it comes back on the right. And you can do things that are kind of fun. If you want to mess with your sysadmins, you can remove the file and cat f2, and you'll see that even before you have a chance to read it, it's back. You can do this as much as you want, and that's what it's doing: it's taking you to the desired state as soon as possible. And if you really want to get a little aggressive, you can run the watch command with -n 0.1; 0.1 is basically as fast as watch can run. And you'll see that as fast as you run it, on the left, the system notices and takes you back to that desired state. All right? Any questions? Yes. You have a question? That's a good question. Ask me after. Right. So, both good questions. One question was about auditing; I'm going to talk about that in just a second. And a million files is a complex thing, but we'll talk about that too. Basically, you can't really do that. You can fall back to polling, but if you're monitoring a million files, chances are you're doing something wrong with your config management, so that's probably the bigger problem. And just to show you: if we actually touch f4, you'll see it's gone, right? Same sort of thing, right? Touch f4, and the file f4 is removed. It basically works right away. Dimitri has the biggest grin on his face. I think he just met alien technology and is excited.

So let's go back to the slides for a second. So what is this, actually? I'm saying that this is config management, but what is this really? Does this look like anything else to you? What am I doing? Have I merged two technologies? Kubernetes? What? Kubernetes? No. What is this? Not yet. That's not what I'm getting at in this case. What? Anybody? Last guesses. So I think this is actually, if you think about it, monitoring, right? Any type that you write, whether it's a file, or watching some particular system value, or something in /proc, or anything that a traditional monitoring system watches, is config management, in this view of config management. So instead of having an infrastructure where you build all your monitoring types in one language, in one application, in one codebase, and everything else in config management in another place, and try to glue them together, we do this all in one codebase, in one language. And the nice thing is that when certain things trigger, because they've changed, we can automatically use config management to say: how do we bring them back to the state that we want? And bringing something back to a certain state could actually be sending an email, for example; that could be one of the steps in bringing you back to that desired state. So that's really for the future.

So let's talk about this third thing: the distributed topology. Any last quick questions? This is where it gets more exciting. These first two ideas are actually quite important for this third idea, and I'll show you why. So this is a typical topology that many people are comfortable with and know very well. You have a server, and you have a bunch of clients, right? Yes? Say it together: yes? All right, good. Thank you. That's the only one here. I have scarves for people that show enthusiasm. So, yeah, there we go.
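(Editorial aside, before moving on to topologies: the rm-and-it-comes-back behaviour from that demo, in miniature. Watch the parent directory so deletions still generate events, check whether each managed file matches its desired content, and repair only when it doesn't. Paths and contents mirror the demo but are assumptions; this is a sketch, not mgmt's file type.)

```go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

// desired maps file names to the contents we want them to have.
var desired = map[string]string{
	"f1": "i am f1\n",
	"f2": "i am f2\n",
	"f3": "i am f3\n",
}

func main() {
	dir := "/tmp/mgmt"
	for name, content := range desired {
		os.WriteFile(filepath.Join(dir, name), []byte(content), 0644)
	}
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(dir); err != nil { // watch the directory, not the files
		log.Fatal(err)
	}
	for ev := range w.Events {
		name := filepath.Base(ev.Name)
		content, ok := desired[name]
		if !ok {
			continue // not a file we manage
		}
		b, err := os.ReadFile(ev.Name)
		if err == nil && string(b) == content {
			continue // already converged; also ignores our own writes
		}
		os.WriteFile(ev.Name, []byte(content), 0644)
		log.Println("restored", name, "after", ev.Op)
	}
}
```

The check-before-apply step is what keeps the loop from reacting to its own writes, and it's also just correct config management semantics: only do work when actual state differs from desired state.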
So you have this server/client topology, very well known. What are the problems with this topology? It's a great topology, but what are some problems? Single point of failure. Anything else? Bottlenecks, right? There are also some advantages, right? It's very well known. Let's look at a different topology, also with different advantages and disadvantages. This is an orchestrator. Orchestrator means, as far as I'm concerned, a central orchestrator, okay? There are some hybrid models where you temporarily elect an orchestrator, but ultimately, when you're talking about an orchestrator, in my terminology, which I hope you love and adopt, there's one thing that goes out and pokes a bunch of other things. This is a great, very useful topology. It's something you'll commonly see with Ansible, where you have one orchestrator that goes out and pokes a bunch of machines, right? What are the problems with this topology? You can't necessarily reach all your clients. That's actually an excellent argument, which I haven't even brought up in my presentations yet, but that's definitely scarf-worthy. Can I have a toss? Oh, sorry, that was the best I've got. I'm not a pitcher, I'm a hacker, sorry. But that's a really good point, and we should add it to the slides. There are other problems too, but there are also some advantages, right? This is very simple to reason about, in a lot of cases. And that's, I think, one of the reasons why orchestrators are so popular: people just reason about the simple case. We also have to have a way to do the complex cases in an understandable, easy-to-reason-about way. I plan on doing this, but that's not what I'm talking about today.

So this is one topology that I was proposing for my tool. It's called mgmt, because I can't think of a better name. So if we have, say, six peers, and each one directly connects to each other, what's the problem with this scenario? Anybody? Scream it out. Dimitri, what's the problem? Too many connections, right? It turns into, like, N-squared connections. How many connections are we going to have here? Yeah, like, a lot, right? So that's not going to work. So instead, the topology is actually this: everyone is still a peer, everything is distributed, but we transiently elect some of those machines to become temporary masters in this cluster. And those masters basically run etcd, because I want to have a distributed key-value store that holds the data. The nice thing about this is, whoops, you can kill one of these and promote some other one to be a new master, you know, as the need arises, right? And obviously, in the future, you'll be able to say: I want at least some masters in this failure domain and some masters in that failure domain, and so on. But that's basically what we have.

So I'm going to show you an example. I'm going to show you three different hosts. It's all going to be on my one machine, but pretend it's three hosts. And each of these is going to create four files, okay? Two of those files it's going to create locally, on its own machine. The other two files it's going to push, not into the local graph, but into the distributed key-value store. The other thing it's going to do is look in this cloud database thing, and pull down any files it finds onto itself, right? Was that confusing? So: I'm one machine, I'm going to have two files for myself.
I'm going to push two up and pull down however many are there. All right, so how many files will that be on that first machine? Oh, you're all shy. What? Two go up, two stay here, and I pull two down. Someone said four, so that's four. So let me just kill this. I'm going to start up this example so you can actually see it. I don't need this. I'm just making one directory for each machine, and you can see there's nothing in these folders right here. And actually, just for fun, we can do this, so you can see what happens live. So I'm going to run this first graph and see how fast it runs. Basically instantly, we see those four files. It puts two of them on itself; the other two files don't get put locally, they get pushed into that store, but right after that it reads from the store and pulls them back down, so it has those two as well.

I'm going to run that exact same code on the second machine. So what's the second machine going to have? Does anyone know? It's going to have six, right? Because what it's going to do is put two on itself, and push two more into that database, but now there are four files in that database. And when it pulls them down, it's going to have its original two plus the other four, so it's going to have six. Also, that first machine is going to notice that there's new stuff in that database, and it's going to pull those down too, because we asked it to do that. So we'll run that and see how fast it runs. And boom, basically almost instantly, we now have six files on each machine. So what we're actually doing, I don't know if you noticed, is exchanging data without any actual orchestration. We're only listening for events and responding to events. And the nice thing is that when we recompile the graph, we don't have to wait on anybody, because it's waiting only on events and it runs in parallel. So things that don't need to be applied don't get reapplied.

And we're going to do this with one more machine. I'm going to run this one, I don't know if you can see, exactly like the first two, but I'm going to time it and ask it to exit after five seconds. So: as soon as it converges, wait five seconds, and as long as nothing happens, exit, just so you can see how fast the tool actually runs. So we're going to run it on the left. How many files, by the way, should we see on each machine now? Eight. All right. If you're a little confused by this, don't worry, look through the blog post; it should be a little clearer. So I run it on the left, and basically, one, two, three, four, five, and right there at the bottom, can you see that? 5.132 seconds. So the system is very, very, very quick, and you should see eight files on each machine. Cool? Any questions on this? Yes. Yes. Right. So etcd uses something called Raft. It basically came about because there's this great family of algorithms called Paxos, by Leslie Lamport. He invented all this stuff in like '89 or something, but it was too complicated for most people to actually code. So some clever people made a simpler version that deals with this consistency problem, and it's called Raft. I don't implement that myself, obviously; I depend on other tools that do that for me, which is why we can have some nice things like this. Any other questions about this example before I move on?
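(Editorial aside: for a feel of what that push/pull exchange looks like against etcd, here is a hedged Go sketch using the etcd clientv3 API. Each host Puts its exported files under a shared prefix and Watches that prefix to collect everyone else's. The key layout, endpoints, and paths are invented for illustration; mgmt's real schema and client code differ.)

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()
	host, _ := os.Hostname()

	// "Push two up": export this host's files into the shared store.
	for _, f := range []string{"file3", "file4"} {
		key := fmt.Sprintf("/exported/%s/%s", host, f)
		if _, err := cli.Put(ctx, key, "i am "+f); err != nil {
			log.Fatal(err)
		}
	}

	// "Pull down however many are there": react to every exported file.
	for resp := range cli.Watch(ctx, "/exported/", clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			name := filepath.Base(string(ev.Kv.Key))
			dst := filepath.Join("/tmp/mgmt", name)
			os.WriteFile(dst, ev.Kv.Value, 0644)
			fmt.Println("collected", dst)
		}
	}
}
```

A real version would also do an initial Get over the prefix, since a Watch only delivers changes that happen after it starts, and files exported before this host came up would otherwise be missed.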
This is actually, if you're familiar with Puppet... who's familiar with exported resources in Puppet? Yeah. If you're familiar with exported resources, this is basically the same model, except it applies right away. There's no waiting; it's just waiting for events and coming back. And unfortunately, exported resources are one of those things that Puppet doesn't seem to love. I mean, it's actually a great design, but implemented badly. I think with this engine it works quite well, and it's quite powerful, which I'll have to demonstrate in the future.

So, the future. This tool is just a prototype; it's not ready for production use. There's a lot of stuff still to do. The most notable thing that's absent is that there's no DSL. The way you actually describe these graphs is something that I haven't built yet. I'm not a languages expert, and I don't want to screw it up like the Puppet language. Luke himself said something like: you know, I shouldn't have designed the language, but I didn't know better. So hopefully colleagues and people on the internet will help write a lexer and parser, or figure out what language to really use. I've looked at a few other things. I've looked at FRP briefly. I'm not a language specialist, so I don't really know what the best choice is, but it will probably end up being something declarative. So if you are a languages person, please reach out to me and help shape what this is going to look like.

There's a lot of stuff left to do. Is Richard Hughes here, by any chance? All right, I want to, like, pressure this guy. He's a really great guy. He has this great project called PackageKit, and that's one thing that I haven't wrapped yet. Files and services I've wrapped, but packages I haven't yet, and I'm planning on using PackageKit. There are a lot of other types like that to add, plus exec improvements, and a timer type. And you could have types for arbitrary things: a type for virtual machines, a type for Docker images, or anything else you can think of that you can wrap. All we need is a service that provides, basically, events, and a way to action that item.

The etcd process itself is eventually going to be embedded in this one binary. It's all in Golang, so it's quite elegant that you can do that, and have it start up etcd and stop it as needed. That hasn't been implemented yet. I started working on the code, but it turns out etcd doesn't do this themselves yet, so there's no stable API. So that's coming. And right now, this is just a community tool. Basically, I stole this idea from Stef Walter, the Cockpit guy. They have a great project you should check out. He said this one sentence to me which actually stuck: this is a project, it's not a product yet. So anything that you want in this, you just have to send the patches and do the work, and if it makes sense, we'll try to make it work. I definitely want your thoughts. How can you help? Because this is about you, not about me. You can use this. You can test it. You can patch it, share it, document it; if you love writing documentation, I am terrible at this, and would love your help. If you're on GitHub, you can star it and sort of show that this is useful to you. You can blog about it. Tweet about it if you're on Twitter. You can discuss it with everyone, discuss it with me, and hack on this. This is really what's going to make this happen. It's really just me working on this at the moment.
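(Editorial aside: that "events plus a way to action that item" requirement suggests a small interface. Here is a hypothetical Go sketch of what a pluggable resource type needs; the names are invented for illustration and do not match mgmt's actual resource API.)

```go
// Package resources sketches the minimal contract for a managed type.
package resources

// Event signals that a resource may have drifted from its desired state.
type Event struct {
	Name string // which resource fired
}

// Res is anything the engine could manage: files, services, packages,
// virtual machines, Docker images... anything with events plus an action.
type Res interface {
	// Watch blocks, sending an Event whenever the underlying thing
	// (inotify, D-Bus, PackageKit signals, a VM API...) reports a change.
	Watch(events chan<- Event) error

	// CheckApply verifies the resource is in the desired state and, if
	// apply is true, repairs it. It reports whether it was already
	// converged, plus any error.
	CheckApply(apply bool) (checkOK bool, err error)
}
```

Anything that can deliver change events and can check-and-repair its own state would fit a shape like this, which is why arbitrary wrappable services qualify.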
This is my one marketing slide. Red Hat's quite nice: they keep paying me, and they send me to things like this. So if you want to give them money, please do. And like I said, this is an upstream community project, not a product, right now. And so let's just recap. Have you seen this guy? He does this, like, math and magic show where he recaps his pen. It's great. If you haven't seen it, check it out on TED Talks. So, there's the technical blog of James; please check it out. On GitHub, the project is right there. It's all open source. I have just one article on the topic at the moment, but I'm purpleidea on IRC and Twitter and GitHub and all these things, if you want to reach out to me. Please vote. There's a feedback link, and I would really, really appreciate it if you take, like, one minute, three minutes, I don't know how fast you are at typing, go to this link, and tell the conference organizers whether you liked my talk or not. If they really like it, maybe you won't have to sit on the floor next time. I don't know. No promises. And if you have any more questions, I'm happy to answer them.

Will... The question was: will I integrate this with things like Satellite? That is something you'd have to ask the Satellite team, in casual conversation. First of all, this tool is not ready at all for that stage, but if the Satellite team is open to adding another provider, then it's something that could definitely be done. I personally do not know the Satellite codebase or the Foreman codebase, so I probably won't be involved directly in that, but if that's something you really want, and you know the Foreman codebase, please, you know, send patches. That's basically how this works. I actually have a bunch of time left, so ask more questions. The gentleman in the back. How does the monitoring... Right. The question is: how do you merge config management and monitoring? That's a good question. For anything that you might want to watch in a monitoring system, if you create a native type for whatever that might be, then you could build a graph in a DSL (which doesn't exist yet) that describes what to do when you have certain failures. For example, if you see something happening, you could follow the graph to see how to action that, in addition to potentially sending some emails for monitoring, and so on.

Right. So the lovely thing about this graph: if you look at these graphs, and you're familiar with Puppet, there's the concept of the notify/subscribe pattern, and there's the require/before pattern. In my tool, there's actually a simplification: there's only require. I mean, you can have the arrows, but there's no need for notification arrows, because that's implicit. We automatically know what's happening because of the events. So anytime you have a relationship, it'll automatically send an event, and that thing will know whether it should operate or not. So if one of those types were, perhaps, an email type, then you could have that be a dependency of the thing that you're watching, or anything, right? I'm not going to prescribe how you should fix your system; I'm just trying to build the tool that will allow you to express that in an easy-to-reason-about way. Any more questions? Yes, Aaron? Yeah. This is really cool.
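(Editorial aside: the "notify is implicit" simplification can be shown with a toy Go sketch where each dependency edge doubles as an event channel, so a vertex that actually changes something automatically pokes its dependents. Names are invented; this is not mgmt's internal representation.)

```go
package main

import "fmt"

// vertex is a graph node whose outgoing require edges also carry events.
type vertex struct {
	name string
	out  []chan string // one channel per outgoing edge
}

// apply runs the resource; if it actually changed something, every
// dependent gets notified along the same edge that orders them.
func (v *vertex) apply(changed bool) {
	if !changed {
		return // nothing happened, nothing to propagate
	}
	for _, edge := range v.out {
		edge <- v.name // implicit notify along the require edge
	}
}

func main() {
	edge := make(chan string, 1)
	file := &vertex{name: "file:/etc/foo.conf", out: []chan string{edge}}
	// A service depends on the file; the one edge delivers both the
	// ordering and the change event, so no separate subscribe arrow.
	file.apply(true)
	fmt.Println("service re-checks because of event from:", <-edge)
}
```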
We're just starting to establish patterns, and we don't quite know yet what the language should look like. Right. So I won't repeat the question, but I'll try to answer. There are some patterns sketched out that should make things easy to reason about, which I haven't described yet, but I'll announce them as soon as I have time to write them all up. As for the language itself: if you think about it, this engine is actually a superset of what something like Puppet provides. Think about the Puppet model, which runs every 30 minutes and then ends. My system basically does that if you just run it from cron with converged-timeout equals one second, and then run it again from cron. That's basically Puppet. And in fact, I've mentioned this to a bunch of people, and someone thought this was such a great idea that they've already started writing a transcompiler to take Puppet code and run it on this new engine. So in theory, this should work. It's not something that I'm personally working on, but it's something that you could definitely do. So all of the existing patterns in Puppet should be identical, and that's what we do here. The other thing that we do, I think, a lot better than what you can do in Puppet, is the multi-machine problem. And that's really what's most interesting for me, because if it's not a multi-machine problem, you're just writing a very fancy bash script, and that's not very interesting. So that's why this is important.

Any other questions? Yes? Yeah. So, how do you handle outages between machines? I mean, you're bound by, like, the CAP theorem: if you have a complete disconnect, those machines cannot talk to each other. The only way you can really deal with this is: whatever code you're running on those machines, as long as it knows what to do depending on how it sees machines coming or going, you program that, and you describe the failure scenarios you want in the module for whatever thing you're wrapping. So if, on disconnect, you want to basically shut down the machine or something, you could program that in. If you want to, on disconnect, generate some sort of error warning, or flash lights, it's entirely up to you. You get basically what you get with Raft and etcd, and anything specific to what you do in these scenarios is entirely up to the programmer.

Yes, another question in front. So you're saying you have an ordering problem. What ordering problem? The arrows? These ones? This one? In your DSL, you express the relationship of what has to run before what else you want to run. So as long as you express the right relationship, you will get that. I mean, if you write bad code, if you create this file and then create the directory it's in, you'll just get an error, but that's an error in any DSL, so you can obviously write bad code that won't work. This doesn't solve any magical problems there. So I don't know if that's what you're talking about. We can talk more after, if you're not sure. Yes, the gentleman on the floor. Yes. So the gentleman is asking about slower convergence. What you're describing is an interesting question: you're actually interested in knowing how you upgrade a whole bunch of machines, but one at a time. These sorts of complex patterns will evolve, and I think there's a nice way to express them in the DSL, but this talk is not about that. That is a next step which I've been thinking a lot about, and I have some answers, but not for today.
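(Editorial aside: the cron-plus-converged-timeout point is easy to sketch: keep running on events, but if nothing has happened for N seconds, declare convergence and exit. A hypothetical Go sketch of the timer logic follows; this is not the actual flag's implementation.)

```go
package main

import (
	"fmt"
	"time"
)

// runUntilConverged processes events forever, but exits once no event has
// arrived for the given timeout, i.e. the graph is considered converged.
func runUntilConverged(events <-chan string, timeout time.Duration) {
	timer := time.NewTimer(timeout)
	for {
		select {
		case ev := <-events:
			fmt.Println("activity:", ev) // still busy: reset the clock
			if !timer.Stop() {
				<-timer.C // drain a timer that already fired
			}
			timer.Reset(timeout)
		case <-timer.C:
			fmt.Println("converged; exiting")
			return
		}
	}
}

func main() {
	events := make(chan string)
	go func() { // simulate a little initial work, then quiet
		events <- "exec1 applied"
		events <- "file2 repaired"
	}()
	runUntilConverged(events, 5*time.Second)
}
```

Run with a one-second timeout from cron and you recover the classic converge-and-quit model as a degenerate case of the event-driven one.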
So please stay tuned. Yes, any more questions? Really good, important questions. Yes. A request: send me a patch. I would love to not invent a new DSL. If you're a language person and can recommend what language I should use, then I would love to not invent a DSL. I don't know the answer to this problem. If I can avoid inventing a DSL, I would love to, but I need to find a language that fits the requirements for the tool. That's the first goal, yes. That will be a core feature of the tool. So, this code isn't fully finished. I've been hacking on it a bit, and I've been talking to some of the CoreOS people, who have actually been very helpful in answering questions when need be. They haven't been writing patches for me, mind you, but that's okay, I mean, I'm hacking on this. If you know the etcd internals and you're good at Golang, please let me know. Do you want me to show you one really quick demo, one last tiny demo, for a few minutes? Yes? It's actually not. It's not at all; it's not a feature. If it were a feature, that would be amazing. I discussed this with them briefly, and they said they'd looked at it briefly, but they didn't really write the code. I will probably be writing some of this code, or you will be writing some of this code. If it's something that can be merged into core, I'm happy to do that. The protocol is actually very simple: the leader decides who should be promoted or not, modulo failure domains.

Consul. Yeah. The question is: why not Consul, basically? I have a great FAQ entry on this, because it keeps coming up. Why did I use etcd, why not Consul? Consul is basically a distributed key-value store almost identical to etcd. They both use Raft, they're both in Golang. Why didn't I use Consul? The truth is, it's actually pretty arbitrary, because they both could work. I just decided that etcd fit a little bit better than Consul, and I liked how they were running the project. So if there's a problem in the future, we could actually switch. If you really don't like etcd and you want to write the patches to make Consul a build option, that's potentially doable, but it is ultimately arbitrary; the technologies are very, very similar. You might not agree, you might not think they're the same, and that's okay, but I think if you look, you'll see that we're not using the gossip protocol; we're just using the distributed key-value store. So both would work.

Any other questions? I can show you one quick demo if you want. Do you want to see one last little demo? There's actually a small bug in this one that I haven't fixed yet, so this might crash, but if it doesn't crash, then you will see it. If it does crash, then I'm sorry. Let me just type my root password. So I have this service right here that's basically just running sleep for six hours, and I'm just going to stop it. It's actually a very simple service; I wrote it myself. systemd is so easy. And so you'll see it's not running. And just over here, I'm going to kill some of these. Oh, that's what I need. And I think, is it graph2? Yeah. So I'm just going to run this on graph2. graph2 is just basically a simple thing that makes a file and starts a service. And we're going to run it. And so I have to actually make that /tmp/mgmt directory. All right. There we go. So you'll see that the service is now running: it was stopped, the tool noticed it was in the wrong state, and it fixed it. And if I go here and just stop it, you see what happens? Oh, it's running again, right?
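(Editorial aside: under the hood, the service type talks to systemd over D-Bus. Here is a hedged Go sketch using the go-systemd library's dbus package: subscribe to unit state changes, and call StartUnit whenever the unit leaves the desired state. The unit name is hypothetical, standing in for the demo's sleep service, and real code needs more error handling.)

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	conn, err := dbus.New() // connect to the systemd D-Bus API
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if err := conn.Subscribe(); err != nil { // enable change notifications
		log.Fatal(err)
	}

	const unit = "mysleep.service" // hypothetical unit, like the demo's
	updates, errs := conn.SubscribeUnits(time.Second)
	for {
		select {
		case changed := <-updates:
			st, ok := changed[unit]
			if !ok {
				continue // our unit didn't change this tick
			}
			if st == nil || st.ActiveState != "active" {
				fmt.Println(unit, "is not active; starting it")
				conn.StartUnit(unit, "replace", nil) // repair state
			}
		case err := <-errs:
			log.Println("subscription error:", err)
		}
	}
}
```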
So if someone changes the service, it's the same thing as before, except for services. That's basically the same idea, right? See, there's actually a race, and that's it crashing right there. But hopefully you saw a bit of it before it finished. So that's the sort of thing worth monitoring: if your service dies, you can actually have it restart the service, and take some action to do that. So if you want to help me fix this bug: this is basically me not understanding D-Bus properly in Golang, so I suck at that. Last, last question. Or maybe we have two more minutes. Yes. Yes, it does, but you have to add that to the service, so you might not want that. You may or may not want to do that sort of thing; it depends on what you're doing. Whether you use this feature or not in your application is entirely up to you. Yes, any more questions? What about non-systemd systems? I currently don't support them, and I have no plans to, unless someone wants to write the patch. Yeah. This is a Linux-only, systemd-required, PackageKit-required project for now. So if you're running on a Mac, this is not supported. For now, yeah. I mean, this is a prototype, to build a core set of working things. If, in the future, someone says, yeah, I have Windows servers, and they want to contribute those things, fine, but it's not ready yet. Those aren't supported at the moment. So that's it. Thank you very much. I hope you liked the talk. Please vote. And yeah, that's it.