Who am I? I'm a hacker. I work on config management stuff. Come into the room. Don't be shy. I write a technical blog called The Technical Blog of James. Who's seen my blog? Just raise your hand. If you haven't, just raise your hand so I seem really popular. Everyone? Yeah, I've given some other talks and I use all the same jokes. I really don't have time to write new jokes. So, I'm actually a physiologist by training, so if you want to talk about cardiology and everything like that, I'm happy to. Some background. Everything in the config management space was pretty horrible, and so I was looking at better solutions and stuff like that. This is the graph of mgmt awesomeness over time, as you can see. I had a little hard point in my life, but otherwise, up and to the right, yes? Is this the kind of graph you want to see? Am I in the wrong graph devroom? Is this all about Excel, spreadsheets and graphs? Are these the kind of graphs you want to see? Let's see what the guy says. This is my guy who's going to answer this question. Is that the kind of graphs you want to see? No. I got you, right? Did I scare anyone? There were some confused faces that were like, oh, this is spreadsheets. No, this is not the kind of graphs. We're going to be talking about DAGs. Our logo has a nice DAG in it. I know it's a DAG because I checked it when the designer made it, and it was not a DAG, and I was like, no, you can't have that. Does everyone know what a DAG is? You're a pretty advanced graph audience, so I can use tech terms, yes? Who's shy? Just raise your hand so I know where you are. Anybody? Excellent, good. So I'm going to talk to you about our software. If at any point you want me to go more technical into the graph theory stuff, I will. I've kept it more as demos and about the software as opposed to pure graph theory, but you're going to see how it really comes into play. I'm going to tell you about three graphs.
First, just our software, my software. It has two main parts: it has an engine and the language. So the language runs continuously, which I'll show you, and it outputs to the engine, which actually does the work. And how does that work? So the engine has three main parts to it: it runs in parallel, it's event-driven, which I'll show you, and it works as a distributed topology. So these are the kind of graphs that I'm talking about. So in the config management automation space, there's this concept of declarative work. So what we typically do is we think of these blue boxes as resources, and they're a unit of work. So one might set the contents of a file on a server. This one here might set up a package and install it. This one here might start a service, and so on. And we describe these units of work. And then the black arrows here represent the dependencies. So one has to happen before two, two before three, and so on, and so on. And these are all DAGs, because it's a dependency graph, so you can't have loops, right? And up until basically now, all of the tools basically did this. If you can see this light red arrow, they basically did something called a topological sort, and I know you all know what that is, which just said: we're going to do this in this order. But the first big thing that we do differently is we actually run the entire graph in parallel. So everything on the left can run at the same time as everything on the right, because there's no implicit dependency. And even here, once we've run 1A, 2A and 2B can both run in parallel. And then 3A will wait for both to finish, as the synchronization point. Does that make sense? So also in parallel, this is all about you helping me. So if for some reason you have a better algorithm than what I'm using to run through these graphs, you should please tell me, because I am not a good algorithmist. It's very true. So I have a demo of this, but it's not so exciting. I just want to tell you, this is graph number one.
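The parallel execution just described can be sketched as a wave-by-wave walk of the DAG: run everything whose dependencies are already satisfied, concurrently. This is only an illustrative sketch in Python — mgmt itself is written in Go, and its real engine is event-driven rather than wave-based:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def run_dag(vertices, edges, apply_fn):
    """Run resources wave-by-wave: every resource whose dependencies
    are all satisfied runs concurrently with its peers."""
    indegree = {v: 0 for v in vertices}
    children = defaultdict(list)
    for before, after in edges:
        children[before].append(after)
        indegree[after] += 1

    order = []  # record completion order, wave by wave
    ready = sorted(v for v, d in indegree.items() if d == 0)
    with ThreadPoolExecutor() as pool:
        while ready:
            # run the whole wave in parallel
            list(pool.map(apply_fn, ready))
            order.append(ready)
            nxt = []
            for v in ready:
                for c in children[v]:
                    indegree[c] -= 1
                    if indegree[c] == 0:
                        nxt.append(c)
            ready = sorted(nxt)
    if any(d for d in indegree.values()):
        raise ValueError("cycle detected: not a DAG")
    return order

# The example from the talk: 1a -> {2a, 2b} -> 3a, while an
# independent chain 1b -> 2c runs at the same time.
waves = run_dag(
    ["1a", "2a", "2b", "3a", "1b", "2c"],
    [("1a", "2a"), ("1a", "2b"), ("2a", "3a"), ("2b", "3a"), ("1b", "2c")],
    apply_fn=lambda v: None,
)
# waves == [["1a", "1b"], ["2a", "2b", "2c"], ["3a"]]
```

Both independent chains start immediately, 2A, 2B and 2C share a wave, and 3A acts as the synchronization point.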
This is the dependency graph in our engine, graph number one. Three graphs, right? Yes, cool. Like the infographic? Yeah, all right. So I'm going to show you actually running a graph with simple stuff. So all these things have events. So I'm just going to actually show you the first one. So I just made a very simple config. So I'm going to show you the code, actually, if you want. It's just a little hello-world example. Is that big enough for you to see? Yeah? Basically we just have a file resource. It has a name, it has some content, so some parameters. So we want it to exist. We want the content to say something. And this is basically a vertex in our graph. And the code will basically produce a lot of these. In this case there's only one, so it's a very simple graph. And when we run it, I'll just run it on the left, over here. And on the right, we can actually check. See, it actually made that file. Oops. So we can cat the file, and, like, it's there. It works. But here's the cool thing about mgmt that's different from every other tool out there. If you actually remove the file and ls, you can see it comes right back. So it's actually running in real time. So if I remove the file, oops, it comes right back. I'm going to sit down a bit to type. Sorry, I'm still here, getting tangled by this microphone cord. So if I remove the file and cat the file, you can even see the engine is so fast, it puts the file back before the second part of bash even runs. You can see it's actually running. So it's sleeping, and it detects when it runs, and it goes through. In fact, it's even so fast — there's this watch command. So if I run it with a 0.1 second interval, it'll just run the command over and over again really, really fast. Oops. And you can see it's constantly running it, and it just fixes the state of the file.
This gets much more challenging when we have this being a whole DAG of dependencies, because at any point in the graph, once it's run, or even part way while it's running, some resource might say, hey, the state is no longer valid. So we have to actually work backwards through that graph and refresh that dependency, rerunning it, before we continue down to the rest of the DAG. So it's like this giant race condition, but it actually works; it's amazing. Yeah, if anyone is completely lost, just don't be shy. If it's important, just raise your hand if it's something that I've totally lost you on, because I want you to enjoy this. And this is more like infrastructure software. Everyone good? Are you afraid? Are you excited? Do you want to see more demos? So that's it. So we do this for all kinds of resources. These resource primitives, these vertices in the graph, they can be virtual machines, they can be files, they can even be AWS instances if you want. So I think this is traditionally what config management, infrastructure automation is, but actually it's another thing as well. Just for fun, does anyone know what I'm getting at? Don't be shy, I'll pick on you. Anyone? I think this is actually a bit of monitoring as well. Anyone in sysadmin? Those sorts of people? Who's a graph theorist? Should all be raising your hands, right? Typically in infrastructure, people spend a lot of time to deploy their clusters and then add monitoring afterwards. So if we can actually build these two things into one, then it's quite cool. So because all our dependencies are a resource graph, we can actually use graph theory and do all sorts of cool stuff to make the graph safer. So if there's anything that we can do algorithmically to decide what should happen, we can.
So what we actually do is we analyze the graph in real time, and we can determine, for example, if someone is installing a package and a file and a service, we know that the service is gonna have to be something that comes from that particular package, because there's packaging metadata that says this service file comes from that package. So we will automatically add an edge between those two vertices, even if you don't specify it, that guarantees that the dependency takes place so that the graph runs correctly. And we do this for all sorts of different things. So if you're executing some command as a certain user, there's a user resource that will make sure that it has to be installed and set up before it runs the command, and so on. Make sense? Yes? Don't be shy, don't be shy, interact with me. So here's a little graph tricky question for you. So here's a little graph that I drew. It's not very pretty. But this graph actually has an optimization that we can make. So the blue vertices are packages that we wanna install. The two red ones are files, and the green one is the service. So does anyone know what we could do to improve this graph? So if we're gonna run this graph, running things in parallel, how could we improve this graph? Yeah? And what, sorry? Install the packages in one step? Yes, exactly. So we can actually look at this graph — so the user might specify this graph as code — and we look at it and we can rearrange it, and it looks like a completely different graph, but in fact we've just grouped these blue vertices on top of each other. Because it turns out, in a lot of software, like Puppet and Ansible, they'll install the first package, start up the package manager, shut it down, do the second one, and so on. And that wastes a lot of time, because there's overhead for each one. So we can analyze the graph and say we're gonna group these three things into a single package install command, which makes it go way, way faster.
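The grouping optimization can be sketched as a graph transformation: collapse all package vertices into one grouped vertex and rewire their edges. This is an illustrative Python sketch, not mgmt's actual autogrouping (which is written in Go and only groups resources when it is provably safe); the package names below are made up:

```python
def autogroup_packages(vertices, edges):
    """Merge every 'pkg:' vertex into one grouped vertex, rewiring
    edges, so a single package-manager transaction installs them all.
    Assumes no dependencies between the packages themselves."""
    pkgs = [v for v in vertices if v.startswith("pkg:")]
    if len(pkgs) < 2:
        return vertices, edges
    group = "pkg:[" + ",".join(sorted(p[4:] for p in pkgs)) + "]"
    rename = {p: group for p in pkgs}
    new_vertices = [v for v in vertices if v not in pkgs] + [group]
    # rewrite edges through the rename map, dropping self-loops
    new_edges = sorted({(rename.get(a, a), rename.get(b, b))
                        for a, b in edges
                        if rename.get(a, a) != rename.get(b, b)})
    return new_vertices, new_edges

# A graph shaped like the slide: three packages feeding
# two files and a service (names are hypothetical).
v, e = autogroup_packages(
    ["pkg:drbd", "pkg:nginx", "pkg:keepalived", "file:a", "file:b", "svc:web"],
    [("pkg:drbd", "file:a"), ("pkg:nginx", "file:b"),
     ("pkg:keepalived", "svc:web"), ("file:a", "svc:web")],
)
```

After grouping, the three separate package installs become one vertex, so the package manager starts up once instead of three times.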
Yes, good question. "I have a question regarding my style." Yeah, me too. The question: what about when you're running the install command, while you're installing the package? Oh, no. That's impossible. Yeah, that's a really good question. That's basically impossible, because it's a dynamic thing that you decide at runtime. So that's sort of something you would catch in tests. Right — but if you think that you could catch it statically, then you should tell us. But yeah, great question; we don't do this today. But we do this grouping, and the cool thing is it makes things way faster. So obviously this is optional — there's a flag per resource if you don't wanna do that; it's easy. Is that cool? You wanna see a demo? No? You wanna see a demo? Yeah. All right, come on. So let me just run this. Actually, my password is "password", if anyone's listening. I'm just gonna run the first part of the demo. So I basically had that graph that I just showed you. It's not the best demo, because you can't see what's happening, but basically you can see that the engine looks here and it says, oh, it can group these two packages together, and then it can group these two together. So it actually does an iterative thing, where it takes two, groups them together, then it takes those two and groups those with another one, and so on. And we can check that this all works. Let's see — hey, cowsay is working, which is excellent. And you can even have fun with cowsay, doing cowsay of cowsay, on and on forever. And we can't do this anymore, because the cow people are gonna be angry. So, moving on, let's talk a little bit about the language. So the language is the thing that feeds into the engine. And the language is very special, because it allows us to declare the state of our infrastructure in real time.
In the olden days, in the puppet days, people basically assumed that all our servers were static. You set up the servers and this is the database server and this is that server and everything stays the same. But in fact, this is not real life. Load changes all the time. We wanna shift things around. Failures happen. We need dynamic infrastructure. And so I've built this special language which uses graphs, hint hint, foreshadowing, which uses graphs to actually describe this over time. And I'm gonna show you about this. So this is sort of the idea. We want this powerful language but also something that's very safe, yeah? Does everyone know what a DSL is? Yes, what is a DSL if you don't know? Yeah, so it's not a general purpose programming language. You can't use this to like do some fun stuff. It's just a very specific domain specific language that's useful just for this infrastructure problem. It has some cool properties. It's very safe language. It's very powerful, which you'll see. And it's easy to reason about because you wanna write a small amount of code that does lots of amazing stuff. So here's a demo that I'm gonna show you. I'm just gonna run this over here. Oops, I think it's this one. So I'm gonna run it on the left. And here's what's happening, okay? So try and follow this code a little bit. Over here I have this date time function and I'm adding it to whatever this variable is. But wait, the variable is down here which is the product of all these numbers. And then this variable goes into this value over here. And then down here I have this load function which is just getting stored in a variable and also put into there. And then whatever this function produces. And then I group all these things together into a big string and I put it into the file. So what's confusing about this? Everything, what? What, louder? Dollar sign, all right. I found the Perl user, what else? What else is confusing about this? Let's actually run the code. 
Let's actually see it running, and we'll come back to this example. So it's running on the left — mgmt is running. And what I've told it to do, remember, is just to make a file with the output of all this stuff. And just to show you what's happening: watch is just a command that I showed you before; it runs some program over and over again, and I'm just gonna cat the contents of that file, so you can see in real time what mgmt is doing to that text file. And if you look — look, this number is going up every second. It's doing that computation, and it's printing out a date that's going up every second. You can see this load value, which is the system load, which the kernel recalculates every five seconds, going in there. And there's something weird going on here. So what's actually happening is, all these functions — this is called a reactive programming language. These don't produce single values; they produce streams of values. And every time that they decide, independently, that there's a new value, it'll recompute just the parts that need to be recomputed, and then feed those into those new values, which get recomputed, and so on. Ultimately, this string at the bottom, that is the file contents, gets recomputed, and then mgmt says, oh, I'm reactive, put this in the file. And it happens to be doing this constantly — in this case, about every second, because the date time changes every second. Does that make sense? And that's what we're seeing here. In fact, just to show you can use any sort of data source you want, I even wrote a function, this VU meter function, and what this is actually doing is listening to my actual microphone right now. And if you see, if I make noise, you see it goes up. So you can actually use this to model real-life things. So if we're really quiet, and then I point to you, you're gonna make some noise — let's see if it works. Come on — every other talk I've given, people made the thing peak. You wanna try again? Okay.
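Stepping back from the demo for a second: the stream recomputation described above — functions emit new values, and only the downstream parts that depend on them get recomputed — can be sketched with a tiny change-propagation cell. This is a toy in Python, not mgmt's actual function engine:

```python
class Cell:
    """A tiny reactive value: when it changes, every expression built
    on top of it is recomputed and pushed downstream."""
    def __init__(self, value=None):
        self._value = value
        self._watchers = []

    def watch(self, fn):
        self._watchers.append(fn)

    def get(self):
        return self._value

    def set(self, value):
        if value != self._value:
            self._value = value
            for w in self._watchers:
                w()  # push the change downstream

def formatted(datetime_cell, load_cell, out_cell):
    """Recompute the 'file contents' whenever any input cell changes."""
    def recompute():
        out_cell.set(f"time: {datetime_cell.get()}, load: {load_cell.get()}")
    datetime_cell.watch(recompute)
    load_cell.watch(recompute)
    recompute()  # compute the initial value once

now, load, contents = Cell(0), Cell(0.0), Cell()
formatted(now, load, contents)
now.set(1)  # only the datetime stream changed...
# ...but contents.get() is now "time: 1, load: 0.0"
```

In mgmt's language the same idea applies to every expression: `datetime.now()` or the load function emits a new value, and only the string that depends on it is rebuilt and written back to the file.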
Yeah, all right, good job. So why is this useful? The reason this could be useful — this is kind of a joke example, and that's why I like to show it — is imagine your infrastructure: in your server room you have a bunch of microphones, and in the office, if you hear a lot of screaming and loud noises, it might be a fight. So you automatically set all your infrastructure read-only for an hour, and then nothing bad happens. But anyways, we gotta talk about graph theory or the organizers will be very mad. So what's interesting about this code? What's weird about this code? Apart from all the reactive stuff. Anything weird? Like if you wrote this code, would someone be mad at you? And why? Yeah — it doesn't work on what, sorry? On a leap year? That's a good point. Yeah, so this is not date-time safe, but it's just for fun. Date monotonic is coming in the next version. So any other questions, any other ideas? So it's super out of order, right? So like if you look, this is down here but then that's not even defined, and that's one of the weird things about our language. The language itself — the flow of data — is actually also a DAG. So you could, and we actually allow this in the compiler, you can write it out of order and it does work. If you do this, in terms of readability, you're a bad programmer and you should be ashamed of yourself, because there is the topological sort of code that you should write when you're writing the code. But it actually does allow it, because it's actually a real graph. It's a DAG. We actually use the same graph library as well, so definitely a DAG. And remember the way the compiler actually works: after it lexes and parses, it goes through and it produces an actual graph, and just before that it's type checked — it's type unified, the types are unified — and then the graph actually has very specific types.
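That out-of-order property falls naturally out of treating the code as a dataflow DAG: build dependency edges from variable references, then evaluate in dependency order, regardless of the order the programmer wrote things in. A minimal sketch (in Python, not mgmt's actual Go compiler):

```python
def evaluate(bindings):
    """Evaluate variable bindings in dependency order, ignoring the
    order they were written in. Each binding is (name, deps, fn):
    fn receives the already-computed values of its deps."""
    done = {}
    remaining = list(bindings)
    while remaining:
        progress = False
        for b in list(remaining):
            name, deps, fn = b
            if all(d in done for d in deps):
                done[name] = fn(*[done[d] for d in deps])
                remaining.remove(b)
                progress = True
        if not progress:
            # nothing became computable: there must be a loop
            raise ValueError("cycle: the dataflow must be a DAG")
    return done

# Written "out of order": $sum refers to $b before $b is defined,
# which still works because only the graph structure matters.
env = evaluate([
    ("sum", ["a", "b"], lambda a, b: a + b),
    ("a",   [],         lambda: 6),
    ("b",   ["a"],      lambda a: a * 7),
])
# env["sum"] == 48
```

A real compiler would do one topological sort instead of repeated scans, but the point is the same: declaration order is irrelevant as long as the references form a DAG.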
So you know statically that this vertex produces a string that goes to this other vertex that consumes a string, and so on. And just graph theory. Is that cool? Yeah? So don't write it out of order. Oh, there was a clap. Did you show the graph? What? Can you show the graph? Oh yeah, I can. I don't want to do that right now, because they're huge. So everything basically has a huge graph. I shouldn't actually show the graph. I don't know why I didn't. So this is graph number two. Ta-da — tale of three graphs. You want to hear about the third graph? Want to see more demos? More demos or more graph theory? Demos? Okay. I'm going to show you just a quick thing we're doing with this graph, because it's kind of fun. So because everything goes over time — so here's our date time function again — we actually can store a value like this, DT equals whatever date time produces. So then each variable allows us to actually look back at the previous value, or the previous value before that, and so on, and actually make determinations about past values as well. And so just as a quick example of this, if I just run this here, just go here — same thing, I'm just outputting to a file because it's easy to visualize. Oops. There we go. So we're just running this program and printing out the values. So this is the current date time value. You can see the time a second ago, a second ago, a second ago. And that's really quite useful. Does anyone know why? There's laughter. The data changes. What? The data changes. Yeah, so let's be more specific. What can we do with this? So, good question. I'll show you a little example. So this is a picture that I took with my parents' crappy phone at my house. What is it? It's a thermostat, with the natural Canadian units of Celsius, the world units that everyone uses.
I have a clearer photo, which is unfortunately in some sort of weird units, and I can't figure out what they are, because no one would possibly want their house at 70 Celsius. But this is a thermostat, and — what? Yeah, it's a sauna. Actually, someone made that joke already. I make the jokes here. I will find you. Only my jokes are allowed. Yeah, so these things have a very interesting property. Does anyone know what that is? Besides keeping your house warm. Yeah, hysteresis. You saw my talk? You're just a brilliant scientist. So it's hysteresis, definitely. And what hysteresis is, is basically: if you had a thermostat that, when it got to 20, switched off right away, it would drop like 0.1 of a degree and switch back on, then heat up, and it would just flap on and off very quickly. Tick, tick, tick, tick, tick. And it would break your heating system, or at least annoy you with all the clicking. So in fact, there's this property called hysteresis that allows you to hit a threshold and then descend past that threshold by a certain amount, either in time or in distance, before you go back up and turn it on. And so I'm gonna actually show you a demo of this, if you'd like. You wanna see a demo? Yeah. Come on. You wanna see a demo? Yeah. You are shy. Who are the shy ones again? All right, excellent. So I'm just gonna run this demo on the left, and I'm just gonna show you here. So what I'm doing: I'm running again a watch command that's gonna print out this text file, and then I'm gonna run the virsh list command, which just shows what VMs are running on my system. So by running this code on the left, it actually started up two VMs, and you can see the current system load and a threshold. And what I've programmed it to do is I've said: when the load hits 1.5, you're gonna actually shut down one of the two machines, basically to move that machine to another host somewhere. And I've also added some hysteresis — a 10 second width.
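The controller in this demo can be sketched as a small hysteresis gate: shed the VM immediately when load crosses the threshold, but only bring it back after load has stayed below the threshold for the whole hold period. The threshold and width come from the demo; the controller itself is an illustration in Python, not mgmt's code:

```python
def hysteresis_controller(threshold=1.5, hold_ticks=10):
    """Return a step function deciding whether a VM should run.
    Above the threshold: shut down immediately. Below it: wait
    hold_ticks consecutive ticks before turning back on."""
    below_for = 0
    running = True
    def step(load):
        nonlocal below_for, running
        if load > threshold:
            below_for = 0
            running = False          # shed the VM immediately
        else:
            below_for += 1
            if below_for >= hold_ticks:
                running = True       # stable again: reschedule it
        return running
    return step

# One tick per load sample; a 3-tick width keeps the example short.
step = hysteresis_controller(threshold=1.5, hold_ticks=3)
states = [step(l) for l in [0.2, 1.7, 1.0, 1.0, 1.0, 1.0]]
# -> [True, False, False, False, True, True]
```

The load spike at 1.7 shuts the VM down at once, but the recovery only happens after three consecutive quiet samples, so a briefly dipping load can't make the VM flap on and off.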
So after it goes below 1.5, it's not gonna turn the machine back on until it's been below 1.5 for 10 seconds. And then it's gonna reschedule it back onto the machine. So basically, if you have a lot of computations going on, you have some noisy neighbors, stuff like that, this is something that you might run to say: okay, there's too much load on this machine, we're gonna split it across more machines. So it's getting higher up. Poor little laptop — you can see it's super ghetto. And I'm just gonna heat it up like crazy, sort of create more load. Oh, there we go. So we hit 1.7, and you saw the first VM, mgmt2, shut down. So we're just gonna kill all these terminals. Okay. So we killed that. So watch when it goes below 1.5; we're gonna see what happens. There it went — but it didn't start up the machine yet. So 10 seconds are gonna go by. That's five, four, three, two, one. Boom, you see it? Came right back. You get that? Is that cool? You can clap if you want. So these are the kind of fun things you can do with this graph-based language. Maybe it's not pure graph theory, but I wanted to show you this stuff. And the code is really simple — you can do this in like 10 lines of code. I'm just gonna kill these two machines; we don't need them anymore. So let's go back to the language a little bit, because this is where our graph runs. So this is more on the infrastructure side. The language itself and the engine are actually two separate pieces. They're still compiled together, but if you had a problem in the graph, like some runtime error — which is theoretically very, very rare, but it could happen, like hardware failures and other things — the engine could keep running. So it gets a stream of graphs, and that's how this actually works. Every time the language runs, it's producing output, which again is that first graph — it's producing a stream of those first graphs. So in fact, what we actually do is we push that graph to the engine.
The first one will run. The next time we produce a graph — which could be an hour later or a millisecond later — we do a kind of a graph diff to see what has changed in the graph. We pause the running graph, we switch them over, and then we only recompute the state for things that we need to. And so this is actually another point of contention. So the algorithm to write this — I'm not an algorithmist, as I might have mentioned — it's kind of a little hokey, and it seems to work. We're not sure if there's a more optimal way to do this. And so I'd invite you: we have a bunch of test cases, it passes, everything works perfectly, but it would always be great to find out if there was a much more optimal way to do this. But fortunately it hasn't been a problem at the moment. That's a really good question — how big do the graphs get? So we haven't actually built any enormous graphs at the moment, because it's still kind of a new tool, and there are still some features that are missing for larger customers, or people basically. So at the moment it's mostly been all smaller test graphs. I actually have found some performance issues with some very large graphs, but typically our goal is in the thousand or ten thousand vertices sort of size. They're not million-host graphs. The graph library is actually a library that I wrote, which is part of our code base. So if you are a graph purist, I would love for you to look at it and tell me how bad it is. It uses adjacency mapping and seems to work great. So yeah, I talked to you about this. So the code actually is a graph itself. The language doesn't care; it works great. If you write out-of-order code, you're definitely insane. So there are some things like: variables are immutable.
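The graph diff mentioned above can be sketched as set arithmetic on vertices and edges: the engine only has to add, remove, or leave alone. This is just the idea in Python; the real mechanism also has to pause and resume the running graph safely:

```python
def graph_diff(old, new):
    """Compare two successive resource graphs and report what the
    engine must add, remove, and leave running untouched.
    Each graph is a (vertices, edges) pair."""
    ov, oe = set(old[0]), set(old[1])
    nv, ne = set(new[0]), set(new[1])
    return {
        "add_vertices": sorted(nv - ov),
        "del_vertices": sorted(ov - nv),
        "add_edges": sorted(ne - oe),
        "del_edges": sorted(oe - ne),
        "keep": sorted(ov & nv),  # these resources keep running
    }

# A new graph arrives that adds one package in front of an
# existing file -> service chain (hypothetical resource names).
old = (["file:a", "svc:web"], [("file:a", "svc:web")])
new = (["file:a", "svc:web", "pkg:nginx"],
       [("pkg:nginx", "file:a"), ("file:a", "svc:web")])
delta = graph_diff(old, new)
# only pkg:nginx is new; file:a and svc:web keep running untouched
```

Only the delta gets acted on, which is why a new graph arriving a millisecond later is cheap when almost nothing changed.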
So what immutability means is: if you were to write code like x equals five and then x equals six, this would be a compile error, because we want to stop the programmer from making these kinds of dangerous choices that could hide a bug or something like that. So we do that. I talked to you about hysteresis and the different kinds, and the whole point of everything being reactive is that we model real-time systems. So real systems change, there are error scenarios, all sorts of stuff like that. So I like to do this. And the most interesting thing — these new graphs keep popping up. I try and stay away from graphs, and it turns out it's my graph library again. So the way our imports work is actually also a graph. So you can't actually have circular imports. And this is actually graph number three — tale of three graphs. So it turns out it's very easy to do this. I actually just store a graph as I'm going through the modules. And every time I add something new, I do a topological sort. And if there's ever a loop, we know that the import is obviously importing something that imported it, and so on. You all know how that works, yes? Good. Easy stuff. In your graph, do you look at just library dependencies, like library name, or do you also include the versions in there? That's a great question. So there isn't actually a distinct concept between the two. It's really about what people import. And when you import something, it's pointed to a specific code base, and optionally a specific sha1 commit. So technically someone could import a different version of that same code. But no, it wouldn't — like if you have the same code pulled in twice, it doesn't matter, as long as it doesn't loop back to the same thing that you already imported. So, and it works great. There's some more stuff I can show you, and I have a lot of time. So I have some other demos and things we can talk about in graphs.
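The import-loop check from graph number three can be sketched like this: before accepting a new import edge, test whether the imported module can already reach the importer — a reachability check that rejects exactly the edges a topological sort would flag. A Python sketch (mgmt's version lives in its own Go graph library):

```python
def add_import(graph, importer, imported):
    """Record that `importer` imports `imported`, refusing the edge
    if it would create a circular import.
    graph: dict mapping module name -> set of imported modules."""
    graph.setdefault(importer, set())
    graph.setdefault(imported, set())

    def reaches(src, dst, seen=None):
        """Depth-first search: can src already reach dst?"""
        seen = seen if seen is not None else set()
        if src == dst:
            return True
        seen.add(src)
        return any(reaches(n, dst, seen)
                   for n in graph[src] if n not in seen)

    if reaches(imported, importer):  # the new edge would close a loop
        raise ImportError(f"circular import: {importer} -> {imported}")
    graph[importer].add(imported)

# Hypothetical module names: main imports net, net imports util.
g = {}
add_import(g, "main", "net")
add_import(g, "net", "util")
# add_import(g, "util", "main") would raise ImportError:
# util -> main would close the loop main -> net -> util -> main.
```

Doing the check incrementally, on every new edge, means the error points at the exact import statement that introduced the cycle.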
There's actually another graph that you might find interesting. But I'll just go with this stuff first. So there's a lot of stuff that still needs to happen. So in the engine, there's some interesting graph analysis that we're not currently doing, which someone who's on the science side might find more interesting if you're really into graph theory. So basically analyzing the graphs and seeing if there are behaviors that would cause oscillations in the code. So basically, if you have this reactive language, you might have a situation where something happens that causes something else to happen, that causes a loop, and sort of has this re-entrant behavior. It turns out this is actually useful in a lot of cases, and you actually want this re-entrant behavior, but I'm fairly certain that there's some sort of static analysis we could do of these graphs to decide and find certain dangerous scenarios where a programmer might have made a mistake. So if you're really into graph theory and you're interested in that sort of thing, you might find that quite fun. So in the standard library, there's a bunch of functions that still need to be written. So if you're writing some code, you might be missing something. One of the things I had in mind, actually, is when you are building out a cluster: I thought about having a graph module in the language itself, so that as you run mgmt and as it automatically moves to new machines, it builds a graph data structure as it goes, and maps out your infrastructure and the connections between machines. And I'll show you a quick demo of something very similar, if you'd like to see. So what I'm gonna do — I don't have the graph example yet, but I'm gonna show you sort of the step before that. So I'm gonna run a bunch of machines on mgmt. I'm gonna run each one.
What they're all gonna do is: each machine is gonna join the cluster, and because they work as a distributed system with the Raft algorithm, they can all share values, and they can do this in a safe way. And if I just watch, I'm just going to show you what's here. So the first machine — they're all gonna run the same code — they're gonna start up, generate a random value, and put that in this database, the shared database. In this case, it's just a map with their host name and the random value. Now if I start up a second machine — so this is the second machine — if you look at what I'm typing here, I'm just giving it a host name, h2, so they can run multiple on the same machine, just tricking it. And all you do is you give it the IP address of the first machine, or any earlier machine in the cluster, and then, because they're all running together, I give it different ports to run on, and then I just give it the code to run. So the code here is very simple. The code is right down here. It basically just generates a random value and then puts that value into this exchange function. Again, it's a reactive function, which will output the values that everyone else also put into that function across the different machines, which I then just print out. So let's see what happens when I run the second one. So they're gonna cluster together. So this first entry is from the first host's perspective, and the second entry is from the second host's perspective. So you can see they've shared each other's values, and if you add another one, you'll basically have the same thing. The third one comes, and you see that they all wake up and get the new value basically right away, and you can keep doing this. So as each machine joins the cluster, it can interact with that data store.
Now imagine instead of this map, that data store was an actual graph stored in this shared data structure. So as you are a host, and as you join another host, you actually build that graph. It wouldn't be a DAG — you build that graph of the whole topology. And so I think this could be something that we could maybe consider thinking about in future data centers, where you just rack a bunch of machines and let the software automatically decide the efficient ways to send traffic back and forth, and who should talk to who, and stuff like that. Does that make sense? It's a bit far out, but sort of the idea I was thinking. Any quick questions? How much time do I have left? So again, there's still a lot of stuff left to do in the project. A lot of bugs, things like that. Let's talk about all of you for a bit. How can you help? You can use this, you can test it, you can patch it, you can document it, you can star it on GitHub if you're into that thing, you can blog about it, you can tweet it, you can discuss it, you can hack on it, all these things. If you wanna write graph stuff specifically, there's a ton of graph work that we'd love to have done. This project — I left a relatively cool tech company to work on this, because I wanted to see how far I could take it. Unfortunately, I'm just living off my savings to hack on it. I wanna keep it all free software. So if you wanna send me patches or code or money, that's definitely welcome. I started a Patreon, and funding a hacker is very sexy, because the alternative is I make it all proprietary, and then you don't get to see how the graph internals work, and that's not fun. I have some time for perhaps one last demo if you want. But how much time do I have left? We have nine more minutes. So we can do a demo and keep a few minutes for the questions. I'll take just a brief pause for some questions now, in case anyone wants to talk about anything, then I can show you some more stuff. Yeah, go ahead. That's a great question.
The question was: how do you extend this with new resources? It's actually very easy. We have what's called a resource API. There are basically four or five functions you have to implement, and you can write a whole resource in about 200 lines of Go code; some less, some more, depending on what the resource is. There are, I think, almost 20 resources so far. I can show you. If you go into the engine folder, then resources, these are all the resources we have now. We have an AWS EC2 resource, a cron resource, a Docker container resource, an exec, a file. We have, actually, a graph thing that's not done yet; it was a graph resource, and that's a secret, hence the work in progress. And a whole bunch of other stuff: nspawn, print, virt machine, user, group, and so on. So there's a lot in there, and it's very easy to add more. If we look at some code that's not too gross, see user, for example: the whole thing is around 400 lines, and a lot of it is boilerplate, because this resource does some fancy stuff. But where the resource actually does the work is in one function, where you just set the state that's requested, and then the event part is in a function called Watch, which is basically a main loop that runs and detects when the state changes, by whatever mechanism you like. So it's very easy to write a resource. I have a bunch of contributors that I've trained from knowing nothing about the API to being proficient in, you know, a day to a few weeks, depending on how complex it is. Yeah, question. [Audience question, partly inaudible: they've picked it up and are using it, coming from a huge existing environment they'd like to get rid of eventually.] Yeah, so it's still quite new.
So it's really on the borderline of being production-ready for what you're doing. For small usages, I would say it's time to start playing with it and doing some very small setups, to find out what you're actually missing, because if you don't actually get your hands on it, you won't know what you're missing. And if you're only missing a few small things, like small functions, then you can add them to the standard library, and then you're unblocked and you've helped the project move forward faster. I'm starting to use it now for some small things and finding bugs; my next big project is actually more test-case infrastructure, to catch more bugs that slip in and regress, and to just make it more polished. That's the sort of thing. I have a few more slides; I'll just go over them quickly. What's next? A recap. This is another bad joke: this is Arthur Benjamin putting the cap back on his pen, a "re-cap". It's not very funny. There's an IRC channel if you're on IRC, #mgmtconfig. We have a Twitter account. There's a mailing list with really low volume, for announcements and new people. There's the technical blog of James, which you all know about now. There are some other videos about the engine and the language. You can contact me, purpleidea, on IRC and Twitter and all those sorts of things. If you want to take a photo of this slide, sweet. There's more stuff later today. I submitted a whole bunch of talks and a whole bunch were accepted, which is absurd. So I have another talk later today about virtualization, where I'll show more fancy virt demos and cool stuff like that, not so much on graphs specifically. Then tomorrow there's a five-minute talk and another talk on containers, if you're interested. And on the fourth, I'm giving two talks that are going to be much more advanced on this stuff, in Ghent: there's this great conference called Config Management Camp that happens 30 minutes away, so you should come to that.
For people who are interested in this stuff, we have a hackathon on the sixth in Ghent, the last day of Config Management Camp, where you can get your hands on the resource API; we'll show you how to build your own resource, hacking and hopefully having a lot of fun. Quick request: if you liked this talk, I need you to help me out. Take two seconds at the end and bother this gentleman named Michael and say, "thank you very much, I really liked James's talk"; if everyone bugs him for five seconds, it's going to be really funny, because he gets a distributed denial of service. Also, if you go to the schedule page on the FOSDEM website and click on the talk, you can find a "submit feedback" link, which is super secret and hidden; you can actually tell the FOSDEM people, hey, I liked this. And I also have some free stickers. If you'd like a free sticker and you promise to use it, they're super expensive, but I'll give you one; just come up at the end, or I'll be outside in the corner. Hey, there's Soshan, he's an MGMT contributor. Thank you very much. We have four minutes for questions or random demos. Or, yeah, go ahead. I'll give an answer; hopefully it helps, and if not, you can expand on it. So, reactive languages are actually not new. They're almost entirely used for UI stuff, like in the web browser: you describe what happens when the buttons are pressed and things are shiny. I thought that was boring, because I'm not into UI, and I thought this was cool because it was, in my opinion, a novel use of reactive languages. But there actually are other reactive languages; I think a very early one was in a military helicopter, where all the control inputs were just inputs, and the reactive language integrated all of those to drive the control surfaces, so the helicopter would do the right thing depending on how you moved the controls. So yeah, that's definitely a user-interface sort of thing where you have that.
I don't have a helicopter personally, and no one's probably going to give me one. I hope that answers your question, but if not, we can talk more about other uses for it. This is used for infrastructure stuff, but you could write resources to do all sorts of absurd things that have nothing to do with what I care about, and that would work just fine; you can use all the graph logic and everything that's built in to do the work. So that's the theory. Yeah, the gentleman over here? If you have a big what? Sorry? 1,000 resources? Okay, 1,000. So I haven't tested it at any absurd scales, but the lovely thing about that first graph we talked about, the DAG, is that we actually run everything in parallel. There are obviously the memory limits of your hardware, and how many things can actually run. However, once the graph is running, most of the time all 1,000 resources aren't changing at the same time; if they are, something is really wrong with your system. In practice, only some percentage of them might be changing at any moment, and since it runs in parallel, it's only that small subset that actually runs at the same time. So I think you'll find it's pretty fast, and a lot less memory-consuming than other software in this space. Does that make sense? The next question was: is there stateful information that you want to keep? Yes and no. For the most part, no. You actually can share some stateful information in etcd; that was the data exchange I showed earlier. However, and this is kind of a weird, fun property that worked out: it turns out that virtually all of that information can be regenerated from scratch. So even if the etcd database is lost or corrupted, all those values come from code, from algorithms; they're not user input, so they all get regenerated. It's very nice, actually.
For the people who are into that: we use what's actually a CP system, but technically we could probably run as an AP system without any problems. Yeah, go ahead. Can we, what, sorry? Oh yeah, the agents: you do run it on multiple hosts. You run one on every server that you want to be part of the cluster. In fact, I didn't talk about it in this talk, but the raft algorithm in MGMT has code so that if you have a failure, it can respond to those failures and re-elect new leaders and so on, and that's all automatic. As for how the code is deployed: basically you start up MGMT on boot on each server, and they automatically cluster together by having every new machine point at any existing machine in the cluster. At the moment you have to actually specify an existing IP address, but in the future we'll build this in and do something like VRRP, if you're familiar with that networking thing, where you have a virtual IP address which moves around to any machine. So eventually it'll be 100% automatic; at the moment it's mostly automatic. Good question. Yeah, so what you can do if you want, I like it when it's always running, but you can actually run it so that once the graph has converged for some number of seconds, it shuts down. Good question. I think that's it. Thank you so much. Thank you. Thank you. If you want a sticker, come grab a sticker.