Cool. If you do want to hack, I recommend getting some sort of Linux VM or Vagrant VM downloaded. The other talk ends at, like, what? Everyone awake? Hey Brian, you're awake, nice. You're not awake? We're gonna wake you up. There will be a little bit of fire, because, I don't know, sometimes I go to a really boring talk — I didn't see the morning talk, so it doesn't count in this situation. I don't like it when everyone's sleepy and on their laptops, so we have to wake people up a little bit to get them to pay attention for five minutes.

Like a distributed scheduling algorithm. Who cares about distributed scheduling algorithms? Just raise your hand if you don't, it's okay. So let's say you have a cluster of servers and you want them to form a cluster together, but the cluster-formation operation is kind of expensive. This is like a conference-room algorithm, right? You wait while people are coming in; if no one comes in for five minutes, you start. But if you see someone come in — okay, we wait one more minute. You wait one more minute, and you keep sleeping until you achieve some critical mass, and then you're like: okay, fuck it, we're starting. And if someone comes in after that lock, then you say: okay, we'll wait until, say, four people are here, and then we'll let them all into the room and they all join the cluster, or whatever. So now you're distributed systems engineers and you didn't even know it the whole time. Some of you probably are distributed systems engineers and are going to tell me later how my algorithm is wrong for some reason. That's okay.

[Crosstalk about timing and the microphone setup.]
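The conference-room algorithm described above can be sketched as a simple deadline loop: start after an idle timeout, extend the deadline a little each time someone new arrives, and start immediately once a quorum shows up. This is only an illustrative simulation in Python — mgmt itself is written in Go, and all names, defaults, and timings here are made up for the sketch.

```python
def cluster_start(arrivals, quorum=4, idle_timeout=5.0, grace=1.0):
    """Simulate the conference-room clustering rule.

    arrivals: sorted-or-not list of arrival times (in abstract time units).
    Returns (members_present_at_start, start_time).
    """
    members = 0
    deadline = idle_timeout              # start if no one shows up by then
    for t in sorted(arrivals):
        if t > deadline:
            break                        # deadline passed before this arrival
        members += 1
        if members >= quorum:
            return members, t            # critical mass: start right now
        deadline = max(deadline, t + grace)  # someone came in: wait one more minute

    return members, deadline             # idle timeout expired: start anyway

# cluster_start([]) -> (0, 5.0): nobody came, start at the idle timeout
# cluster_start([1, 2, 3, 4]) -> (4, 4): quorum reached, start immediately
```

The `grace` extension is what keeps the loop from starting while people are still trickling in, and the `quorum` check is the "fuck it, we're starting" branch.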
[Crosstalk while setting up.] Yeah, I mean, you don't need to do a check-one-two. I've actually hacked into the Wi-Fi and the files are on your computers already. I'm gonna — no, for the purposes of the project, not for personally pwning your computers.

[Host:] Thank you very much for coming. Good morning. Let me introduce James to you.

Thank you. Hi everyone, good morning. Sorry, it's early. So, just a little about me: I work for Red Hat. You probably know Red Hat because you're here, and they paid to send me here, which is nice. I actually recently switched teams — I'm technically on the storage engineering team now, and I almost forgot to change that on this cover page. So I'm going to try to do automation relating to storage. But one of the biggest problems in automation and config management is that people sometimes try to solve very specific problems. I really wanted to step back and try to solve the general problem: find the tools, the description language, and the way to express the general automation problem that I want to solve, and then, once I have that solution, apply it to different problems in particular. So you then apply that solution to storage, or whatever other thing in your infrastructure you care about.

Just out of curiosity, who doesn't know anything about config management at all? Just raise your hand. All right, who's really shy and hates raising their hand at conferences? Just let me know where you are. Who knows stuff about config management? Everybody. All right, so if you haven't raised your hand, I'll assume you're still sleeping. I'm actually going to sit down for part of this talk, so if you can't see me, don't worry — I'm still here in front. But if you have questions, let me know; we have a lot of time.
So I'm not going to race through the talk like I usually do, but don't be shy. Here's just another quick video of something that happened recently in Montreal, where I live, in Canada. It's kind of slippery — that bus is actually sliding. I thought this was really cool. You might have seen this on the internet; some people have. I was the one who posted it on Twitter first, and it ruined my Twitter account, because I get constant notifications like "someone liked your tweet" three weeks later. So if you work for Twitter, please make a mute button.

It's not even that steep a hill. We just had this freak sudden street-icing thing, no one had their winter tires on yet, and everyone forgets how to drive the first time it snows. Here goes another bus. This isn't about config management, but I had to show you quickly. And then the other bus goes again — this is worth seeing, right? It gets a little bit better, because then they send a snow truck. He's spinning his little salt machine, but it's not gonna work. Okay, sorry, now we have to get serious. That was me — I went out the night before and iced that road. It took me all night, but I did it. Just to let you know.

So: I'm a hacker, I work on config management stuff at Red Hat, and I write The Technical Blog of James. Who's seen The Technical Blog of James before? If you haven't, just raise your hand anyways, so I seem really popular. I'm actually a physiologist by training, so if you want to talk about cardiology, I'm more than happy to — I haven't done a talk on cardiology in a long time, so I've probably forgotten most of it, but if you do like that stuff, let me know. And I'm a really big DevOps believer. DevOps means a lot of different things to a lot of people, but in particular I focus on tools — having good tools so that you can hopefully make your DevOps processes easier. And that's sort of what we're going to talk about today.
So some people might remember some of the Puppet work I did a long time ago — I know Brian probably does. I don't know if you can hear the sound, but this is just the sound of Beaker and the Muppets screaming, because everything is on fire.

I started hacking on Puppet a while back. Around LISA 2013 I gave a talk on some of these advanced Puppet hacks I'd done, and I think I got fairly good at it. Here are some of the things I actually built in Puppet. You can actually do recursion in Puppet. You might think, "oh, recursion" — but the Puppet language, if you don't know it, is actually declarative, so recursion is not a normal primitive; it's not a typical imperative language. But you actually can do this. It's not very useful, it turns out.

So Puppet runs every 30 minutes, which is sometimes a pain if you want something to converge that requires multiple Puppet runs — and indeed, distributed systems do require multiple Puppet runs. So you can actually write this thing — the code is here at the bottom if you really want to look — where it runs Puppet and then detects whether it needs to run again. If so, it forks off a Python process, which double-forks away, watches the parent Puppet process until it finishes, and when that finishes, it immediately forks off a new Puppet run. So it forces Puppet to run over and over again, if you want it to, and you can describe in code whether that's the case.

You can build timers: you might do a similar sort of thing with the same double-forking, but where you actually wait for, say, an hour. This would be something like building a cluster of, say, DRBD hosts: wait for them to converge — the initial sync takes a little while — and then run Puppet again to do the next step, and so on.

[To a latecomer:] But you've missed everything! Just come in. And you can even build finite state machines in Puppet, which is actually kind of useful for doing some more advanced operations. But these are all really, really slow, and they don't work very well.

That's a great question: do the Puppet devs hate me or love me? I'd like to think that they love me. I think the first time I presented this stuff, the greater Puppet community had mostly never seen any of my work, and I think a lot of people were shocked — like, "oh my god" — I don't think they even knew that their language supported recursion in this sort of weird way. So I think I definitely got some nerd respect, but you'd have to ask them.

So the question is: are these things the right thing to do? That's really what I'm getting at, and you're on point there. I think they were good concepts for me to explore, to see what could be possible in a distributed automation setting, but the real question did come down to: was writing these hacks in Puppet the right way to do things or not? What do you guys think? [Audience member:] Nope. — So we'll see what this guy has to say; this is my friend who helps out. Yeah, so the answer is: I think that ultimately, no. The core engine was just insufficient — I wasn't really able to make it viable. I wrote some code that worked and did a lot of stuff, but I don't think it was ultimately very viable.
So anyways, long story short: after years of coming up with these ideas, I sat down and decided to write what I thought was the correct design, and so I now have a tool called mgmt. I'm kind of shit at logos and stuff, and this is what I had just for slides, but someone made me a nicer logo. We even have stickers, which are crazy expensive and I don't have a lot of, but if you want a sticker, come up here at the end and grab one. Oh yeah, we have some community stuff too, but I'll tell you about that later.

There are three main design points of the tool. Just out of curiosity, who's seen this stuff of mine before? And everyone else — you've seen a bit? Okay, I'll go a little bit faster. So, there are three main design points of the tool, and I'm going to show you with live demos how everything works, and then some more stuff. The first is that the graph that we have on each machine can actually run in parallel. The second is that the whole system is event-driven, which I'll show you in a moment. And the third is that the whole thing can work as a distributed system, which we're also going to show by example.

So I'm just going to show you a first graph. These blue boxes — can you guys see them in the back? Okay? Yeah? Cool. The blue boxes represent the resources in mgmt, and the black lines represent dependencies, right? You want to specify some dependencies where something has to happen before something else.

[To a latecomer:] Did you come to see the scheduling algorithm? Should we start over from the beginning? Come in — we're just starting, sort of. Take this chair with you. Just play the video again, play the video again.
Yeah, we'll play it after. So: the blue boxes represent the resources, and the black arrows are the dependencies between resources. If you want to actually run this graph, what Puppet and other tools do is something called a topological sort. If you see this little red arrow here: they basically go through and pick one thing at a time — because they can only run one thing at a time — and run through the graph that way. Can you see that? Okay?

But the thing is, if we can run in parallel — which we can — you can imagine that everything on the left can be running at the same time as everything on the right. And on the left, once 1a here has finished running, 2a and 2b can both run in parallel; we wait until they're both finished, and then 3a can run. Makes sense? It's just basic CS, nothing particularly complicated.

So I'm going to show you this demo. Here's just a very simple demo: I've got three resources — a package installation that takes 10 seconds, a service operation that takes 10 seconds, and this exec operation, which takes another 10 seconds. And here's a longer operation that takes 15 seconds. If I were to run this graph, how long would it take? Scream it out, don't be shy. 45? It would take 45 seconds if it was running in series, but I told you everything runs in parallel. So how long is it going to take? 30 seconds, right — because that's the limiting path.

[Aside:] Oh, you watch — your point is taken. I guess I don't need IRC right now. Okay, so I just built a fresh copy. Can you see that? Okay. So what I'm going to do is time this and run it. By the way, the whole YAML thing is completely temporary — I'll talk to you about that later. I think this is the example. So the engine runs continuously, but what I can do is ask it to shut down after everything has converged.
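The 45-versus-30-second reasoning above can be reproduced with a tiny dependency-graph runner. This is an illustrative Python simulation, not mgmt's engine (which is Go): each node waits on its dependencies, sleeps for its duration, and everything else runs concurrently. The demo's durations are scaled down 100x so it finishes quickly; total wall time should be close to the critical path (0.30), not the serial sum (0.45).

```python
import threading
import time

def run_graph(durations, deps):
    """Run each node after its dependencies finish; independent nodes run in parallel.

    durations: {node: seconds}; deps: {node: [nodes it must wait for]}.
    Returns (elapsed_seconds, {node: finish_timestamp}).
    """
    done = {n: threading.Event() for n in durations}
    finish = {}

    def worker(n):
        for d in deps.get(n, []):
            done[d].wait()            # block until every dependency completes
        time.sleep(durations[n])      # simulate the resource's work
        finish[n] = time.monotonic()
        done[n].set()

    start = time.monotonic()
    threads = [threading.Thread(target=worker, args=(n,)) for n in durations]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start, finish

# The demo graph, scaled down: pkg -> svc -> exec1 in series (0.30 total),
# with an independent exec2 (0.15) running alongside.
durations = {"pkg": 0.10, "svc": 0.10, "exec1": 0.10, "exec2": 0.15}
deps = {"svc": ["pkg"], "exec1": ["svc"]}
elapsed, finish = run_graph(durations, deps)
```

A topological-sort executor would take the sum of all four durations; the parallel runner takes only the longest dependency chain.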
So I can say: converged-timeout of five seconds. If nothing happens in the graph for five seconds, shut down. And I just have some dev-environment options I need to add. Okay, so we're going to time the whole thing. How long do we expect it to take? 30 seconds, plus five seconds of waiting for the timeout. So we're going to run the whole thing and time it, and we should see about 35 seconds, optimally, plus any startup.

So it starts up, it starts running, and you can see right here you've got the 15-second thing and the 10-second thing both running at the same time. All right, this is going to go through: the first 10 seconds finish, and the second part — that one in the middle — starts running. Five seconds later, you see the 15-second thing finish. Five more seconds, and you have that last exec at the bottom running. So it's going — 12 seconds — three, two, one — it finishes. If nothing happens for five seconds, it should shut down, hopefully — and there we go. The whole thing ran in about 36 seconds. Makes sense?

Okay, so hopefully you believe me that it actually runs in parallel. I know you're asking about the package stuff; we'll talk about that later. The truth is that for this particular case, these were just simulated operations with a timer — but we do actually do something cool with package parallelization, which I'll show you later.

So here's the second part: the second design point is event-driven. Who's more familiar with Puppet? Ansible? Maybe Chef? We have some Chef users and a lot of Ansible users. The thing with all these tools is that they basically poll, right? The tool starts up and runs. Puppet, for example, runs every 30 minutes: check the state, go through everything, finish — and then 30 minutes later, run again and go through everything all over again. Ansible does the same thing, but on demand, when you ask it.
So: please run now. Stop, go away, come back a week later, run now again — or come back five minutes later and run again, right? In mgmt, we're actually event-based: we're always in continuous operation, using an event system, and I'll show you how this works. The benefit, obviously, is that if the state on the machine changes and you need to put it back, with a polling tool you won't know until the next run; with mgmt, you'll know instantly, and we'll react to it. Similarly, if you want to change something, it will be applied right away.

We have a few examples of this. Let me just open up a window here — I actually have a directory ready. Is that big enough, or do you need it bigger? Okay. So I'm going to run mgmt again — I just have this dev-environment stuff, this tmp prefix — and what I'm going to do now is show you an example where I ask mgmt to manage three files. The first one is going to be f1, the second f2, and the third f3. Each one is going to have contents: "i am f1", "i am f2", and "i am f3". That's what I'm declaring: please make this so. And this fourth file, /tmp/mgmt/f4, is going to have no existence. So I'm saying: if you see a file /tmp/mgmt/f4, make sure it's gone. Makes sense?

So that's what we're going to run. I'm going to start this up — you can see on the right here, there's nothing in the directory — and then I'm going to go over here as fast as I can and type ls, and you can see it's already done running. Okay, so it's a really, really fast engine. You can cat all of these and see that the files are actually as I declared. But because mgmt is running continuously, you can actually remove f2, type ls, and you see that the file is right back where it was. Remove f2 — it comes right back, instantly.
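From memory, the declaration for that demo would look roughly like this in the (temporary) YAML input format mgmt used at the time. Treat the field names here as an approximation rather than exact syntax — check the project's examples directory for the real format:

```yaml
---
graph: mygraph
resources:
  file:
  - name: file1
    path: /tmp/mgmt/f1
    content: |
      i am f1
    state: exists
  - name: file2
    path: /tmp/mgmt/f2
    content: |
      i am f2
    state: exists
  - name: file3
    path: /tmp/mgmt/f3
    content: |
      i am f3
    state: exists
  - name: file4
    path: /tmp/mgmt/f4
    state: absent
edges: []
```

The `state: absent` entry is the "make sure f4 is gone" declaration; the other three declare both existence and exact contents.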
You'll see the engine sort of waking up and fixing that right away — but it works so quickly that you can remove f2 and cat f2, and as fast as you run this, it has always fixed the state already.

Cool. And in fact, for those of you who are familiar with the watch command: watch is a little program that runs things over and over again, really quickly, and the fastest it will run is ten times a second. So you can say: watch, please run this delete over and over again — and you'll see that even though it's constantly deleting the file, mgmt is so efficient that it puts it right back. Cool. So that's basically how it works. Any questions?

Correct — you don't have to run it continuously. If you really wanted to run in a sort of agentless mode, or just run temporarily from cron, you could run it, wait until it converges, and use this converged-timeout option, which says: after a minute of doing nothing, shut down — and cron will run it again. But if you do want it to run continuously, you can do that too. So you can think of mgmt, in a way, as a superset — a more general version that you can simplify down to make a Puppet-like or Ansible-like machine.

Just to show you this is real — absolutely, I'll show you that after, if you want — you can even echo "hey devconf" into f2, cat f2, and you'll see that it always goes back. Now, there is an inherent race — someone was probably about to ask about the race, or something like that — but the engine always gets to run last, so in the steady state your machine will be in a converged situation. Question?
Yes. So the lovely thing is, we have great kernel hackers who have built facilities like inotify and other things. This uses inotify, and it turns out that this is not a big deal. If you had, like, a billion files that you were monitoring — yeah, that would be a problem. But if you were doing this with a billion files, chances are you're probably not building your system correctly. And even if you were watching large subdirectories of things, there's a newer API called fanotify that can help deal with large numbers of files, and some other stuff. So yeah — question? Yeah?

Absolutely. Hang on a sec — I've got your question, I'm going to answer it in a second, and you have another question. At the moment the algorithm is very simple: we just blast over the top of the file. But if you wanted to write, in code, a more efficient way to change or edit the file — like applying a diff — that could be done too.

And so, to answer your question, which is quite good: I'm just going to show one more quick demo. If you were to create file f4, it would get removed right away. So you can touch f4, and you'll see the same sort of thing happen. All right.

To answer your question more fully: we have an event system for every kind of resource primitive in mgmt — and we haven't yet built most of the resources we'll end up having. For files, we use inotify. For services, we use systemd events — that's how we do services. We get kernel events when we exec a command and it produces output. And we have PackageKit, which gives us package events.
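The per-resource pattern just described — an event wakes the engine, which re-runs an idempotent check-apply step — can be sketched in a few lines. This is an illustrative Python simulation, not mgmt's code (mgmt is Go and gets its wake-ups from inotify, systemd, PackageKit, and so on); here the event is simulated by calling the step by hand at the moment a real watcher would fire.

```python
import os
import tempfile

def reconcile(path, content):
    """Idempotent check-apply step for one file resource.

    Make reality match the declared state; return True if drift was repaired.
    """
    try:
        with open(path) as f:
            if f.read() == content:
                return False          # already converged: nothing to do
    except FileNotFoundError:
        pass                          # file missing counts as drift
    with open(path, "w") as f:        # apply: rewrite the declared content
        f.write(content)
    return True

workdir = tempfile.mkdtemp()
f2 = os.path.join(workdir, "f2")
declared = "i am f2\n"

first = reconcile(f2, declared)       # initial run creates the file
os.remove(f2)                         # drift: someone deletes the file...
fixed = reconcile(f2, declared)       # ...a watch event fires; we repair it
steady = reconcile(f2, declared)      # no drift: check passes, no apply
```

The polling tools run this same kind of step on a 30-minute timer; the event-driven design runs it the instant the watch fires, which is why the demo's deleted file reappears faster than you can cat it.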
I'll demo that for you at the end as well. PackageKit is a great little facility. The other cool thing about using PackageKit, while we're talking about it, is that we actually support the PackageKit API — not Yum or DNF or apt or anything specific. So out of the box, the moment we implemented PackageKit, we had support for Debian and Fedora and all this other weird stuff, all at the same time. And being cross-distro compatible is a major design goal: our goal is to always make sure we support Fedora and Debian in git master. And it's quite easy to do this.

Yeah — so, no systemd and no PackageKit on the machine, no joy. Well, if you don't have systemd, you're probably using a deprecated operating system, and we don't support that. If you really do have some event system for managing services that you want to add a patch for, we can talk about it, but we want to build modern systems, so we need modern facilities. We also actually have a virt resource that we use with libvirt, which I'll show you later.

So I think that this is what config management — you have a question? Ask, tell me. Sure. Tell me — everyone wants to know. Yeah. And all the code — everything that I show you — is online, so you can hopefully look at it, send me patches, and so on.

So, to get back to this: I think this is what config management is, but other people have used other words for it, like "choreography" and weird things like that. But I think this is actually, in addition to config management, another technology as well — not entirely, but partially. Does anyone have any idea what I was thinking of? As a side benefit you get this config management system, but you also get something for free as a bonus. What? Not orchestration.
No — I don't know what that means. Someone else? What — auditing? That's, like, when you're talking in front of — no, I'm not going to build that. What I'm really getting at — you might have heard it — is actually monitoring, right? Think about how sysadmins set up production environments: they set up all their config management and they want to put it in prod, but you're not really supposed to put anything in prod until it's monitored, right? So if your config management system has some level of monitoring built in — checking the state of the different things you've declared, fixing them, and potentially even alerting you if something changed unexpectedly — I think that's a little bit of monitoring. It's not a full monitoring solution, but hopefully it'll make people's lives a little bit easier. Does that make sense? That's what you said? All right, great — sorry, I wasn't entirely sure. You get a sticker. Everyone gets a sticker, hopefully.

So that's just something to think about. Now I'm going to go into that third design point: the distributed-system aspect. I'm going to show you a few topologies, and I want you to tell me a little bit about them. What kind of topology is this? Louder, louder. A centralized topology — it's typically called a client-server topology. Here you have a server and a whole bunch of clients that go off and connect to the server. Makes sense? It's a perfectly good topology, right? A very, very well-understood topology; everyone's probably used it at some point. Very classical. And what are some of the problems with this topology? What? Bottlenecks — be more specific. Yeah: overloading the server. And it's a single point of failure, for sure.

Now let's look at a slightly different topology. This one is what I call a central orchestrator.
It's, again, a very good, very well-understood conceptual topology. This is what Ansible, for example, uses. A central orchestrator is where you have a single thing that initiates the connections out to a whole bunch of machines: m1, m2, m3. What are some of the problems with this topology? Single point of failure — yeah. Anything else? Well, you said it last time: the master. That could be a problem as well. So the problems here are actually very similar to the client-server topology.

Now here's a topology which we're not using in mgmt. Is it that funny? All right, you're the first audience to laugh at this slide — and rightly so; you're a sharp audience. This is where every peer connects to every other peer. What is the problem with this topology? Right, exactly: even with, you know, six machines, you've got a whole lot of connections, and if you get to a hundred — I think we're out of ports at this point. So this is not what we do in mgmt.

What we actually do is something like this. Every machine — or entity — in the cluster is a peer, P1 to whatever. Using a distributed consensus algorithm called Raft, we elect some number of machines — m1, 2, 3, and 4 in this case — to be temporary primary hosts, typically called master hosts. Those four, or however many you've chosen, all connect to each other and form a distributed key-value store, which we use, and everyone else connects to one of those as a client. That way everything connects directly via one of these primary hosts. And the cool thing is — I'm just going to speed through here — if one of these dies, that's okay, because we can automatically elect another one to take over. All right, so this is what we're actually going to build. Yep?

Absolutely — so, a few things. Some of the classical topologies and the ways we're managing our data centers, I don't think, are entirely correct.
First of all, you can definitely mark certain machines as never becoming a master, so that everyone else can be one, but not certain machines which are very sensitive. But the bigger point is that there isn't something special running on one peer. When we think of having the puppetmaster, it's this very special machine that has all of the keys, and the other machines are less important or less sensitive. We don't treat it this way. You should really think of each machine as having its own individual algorithm to follow, so the sensitivity, and the problems associated with it, are actually quite limited to just that one machine.

We'll get to this in a later demo, but you don't have to always be connected — that's number one. And number two, the peer machines actually only connect to one primary machine at a time. So if you want, you can have, say, four machines that are always on, that everyone else connects to, and those other ones never make any other connections except when you want them to do stuff. But it turns out that having a constant etcd cluster running is actually quite beneficial, so this is what we do. The etcd code base, which implements the Raft algorithm, is what we actually run on these four machines — I'm going to show you this in a moment. Anyone familiar with etcd? It's just a very simple key-value database that we can run.
It's quite easy. So I'm going to tell you why we want these connections, because why we actually want them is maybe not entirely obvious. Imagine a scenario where you have a whole bunch of machines. Think of one of them as, say, a router — or a load balancer, let's say — and you have a whole bunch of web servers behind this machine. What you typically want is that when one of those machines boots up, it connects to the network, and you would ideally like it to be able to tell the load balancer: hey, I'm alive, please open a route to me, or please open this port for me, and so on. What we can actually do is exchange information by putting stuff into the database and pulling it out. So that web server might say: hey database, I'm a web server, and I'm here on this port, in case anyone is interested. And a router or a load balancer might be looking for certain patterns of advertisement — hey, I'm looking for people to open ports to — and when it sees those things in the database, it pulls them down and uses them as firewall rules.

To demonstrate this, we're going to do a simple algorithm where we want everyone to know about everyone else in the cluster. Each machine — and I'm going to run this on multiple machines — is going to do the following. First, each machine is going to create one file resource on itself. Second, it's going to put one file up into this distributed database. And then it's going to look in the database, pull down everything that's there, and put it on itself. Okay? I'll go over that one more time: one file on itself, one file up in the database, and then it pulls down everything in the database. So how many files should we have on that first machine? Louder! Yeah — initially, if we just have one machine, you're quite right: we'll just have two, and as we add new machines we'll get more and more. Let me show you this
Let me show you this Visually and I think it will be Quite nice to show you Okay, we're just gonna make so I just have five directors here, which I've made one to represent each machine and just so that we can See this live Gonna run a watch on this so you can see what's happening. So what I'm gonna do I'm gonna run this first machine, okay? Same sort of thing Just see what happens. It's gonna start up and Boom you see right away. You see two files up here So what this machine does actually if it's the first machine in the cluster it'll actually say, okay I'm the first machine on the loan so it's gonna automatically elect itself as a primary master and start up the etsy server as well That's what some of that garbage logs flowing by is so it's actually running etsy D itself. Alright, so now I'm gonna add normally we just start up everything, but I'm gonna add the machines one at a time Just so you can actually see the operations go by So the first machine it put one file on itself one in the database and then pull down everything in the database So you see two files make sense Now I'm gonna add that second machine and it's gonna follow the same algorithm. So how many files are we gonna have on the second machine? Four is it four I think three so we'll just go over it so we have One file on ourself and we put a second file up in the database now There's two files up there right we pull both of those down so we have three Okay, but that first machine that we just ran it's constantly watching this database in real time So it now sees a second file up there and it says I'm gonna pull that down too So it will pull down that second file. So you'll see three on the first machine as well Is anyone not completely clear about this? Yes, okay, it's clear good. 
So let me show you. The only thing we have to do for subsequent machines is point them — we point every new machine at any machine in the existing cluster, and that way it connects as a client. There's a little distributed algorithm it uses, and then it will hopefully get promoted to a master. So we run this: it starts up, boom — my machine is a little slow today, but basically within a second or so you see there are now three files on the second machine, and a third file appeared on that first machine. Makes sense?

Let's add a third machine to the cluster. What's going to happen now — how many files will we get? Four, right. So I'll do that here just to show you — oops — same thing: it starts up, runs, right away four, and then within a second or so this happens. Yes, sir? Yes, you can bring up many at once. I mean, I don't have enough resources to test hundreds at the same time, but you can just start them all up and, in theory, it'll work.

No — so, the thing is, there actually is a race condition in my code, and this is just because I've been too lazy to fix something in etcd. But, that race condition aside, the algorithm is actually very safe, because it uses this algorithm called Raft, which makes these decisions.
So, I think I understand your question: there's no guarantee in the Raft algorithm that you will always be up. The CAP theorem basically says — C, A, P — you have consistency, availability, and partition tolerance, and we are picking a CP system. We guarantee that the system is always consistent, and we guarantee that it can recover from partitions. There's no guarantee that you get one hundred percent availability: if you kill machines — or they go on fire — at a rate higher than the rate at which you're bringing up new ones, eventually you'll fail. But this isn't usually a harmful failure. Similarly, you cannot guarantee that if you start up 10,000 machines at the same time, they won't cause a temporary denial of service and have machines waiting to join. These are not problems that are solvable in real life — we don't have any magic, like faster-than-light travel, here — but we do have a solid algorithm, which lets us build this.

Yeah — good question. Let's talk about it a little bit more after; I want to finish a bit more stuff. If you're still lost or have some more questions, I'll definitely still be around.

So, just to show you: we've got our three machines running, they've done this exchange, and they're potentially watching for more things. So — oops — I can actually talk to any machine in the cluster and ask who's a member of the cluster, and here you can see there are three machines that have become those primary masters in the cluster, because I said: make as many masters as you want, for the moment. But I can actually tell it, specifically: I want there to be up to three primary machines in this cluster. I can just tell it that, and it will say: all right, fine, if that's what you want, James, please do that. And just to show you, now I'm going to bring up a fourth machine in the cluster. Okay — same thing. How many files should we see on each machine now?
Five. So we start it up; that happens very quickly. And now if we go back and query the number of members in the cluster, you can see there are still just three masters: h1, h2, and h3. But the cool thing is, if I now change my mind and say, yeah, I want there to be up to five masters in the cluster, you go here and you'll see that the cluster automatically elected one of the unused machines, promoted it to master, and carried on operating as usual. Okay? I don't want to spend too much time on this, but if there are any quick questions I'll take them, because we have the time today. Anybody?

So this fundamental pattern, putting data into a central, distributed store and having other machines watch for certain patterns of data, is actually extremely powerful. It might not be clear yet why this is useful, but it turns out it is. It's actually very similar to the exported resources pattern in Puppet. That was a part of Puppet that isn't actually used very much, and it was really poorly implemented, because every exchange took a whole 30-minute Puppet run, a run every 30 minutes, and then the other machine would have to run in response. So if you had n machines that you wanted to exchange data with, you typically needed roughly 2(n-1) runs to converge, which takes forever: 2(n-1) times 30 minutes is a big number even for small clusters.

So that's that. I'm just going to shut this down quickly, because I don't care about preserving anything here. All right, so I have some really shitty slides with drawings; if you are an artist and can make these drawings better for me, look at this, it's terrible, please send your drawings. A few more features I want to talk about: I'm going to show you a remote execution example.
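Going back to that convergence figure for a moment, the arithmetic is simple enough to sketch. A back-of-the-envelope calculation (the 2(n-1) run count and the 30-minute interval are the numbers quoted in the talk, not measurements):

```python
# Rough convergence time for a Puppet-style exported-resources
# exchange: each exchange needs about 2*(n-1) runs at a fixed run
# interval, versus seconds for an event-driven etcd exchange.

def converge_minutes(n, run_interval=30):
    """Approximate minutes for n machines to fully exchange data."""
    return 2 * (n - 1) * run_interval

for n in (2, 4, 10):
    print(f"{n} machines: ~{converge_minutes(n)} minutes to converge")
```

Even for a 4-node cluster that is about three hours, which is the point of the comparison.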
For this I need to go over here. So there's this great project this guy started, called Oh-My-Vagrant. It's so cool, it's like the best way to use Vagrant. It's a cheap plug, because I wrote it; I don't really work on it anymore, but it still works for me. So what I'm going to do now is start up two Vagrant VMs on my host, just so you can see what we're going to do here.

Okay, so we have this distributed system, which I think is what you would probably use in normal operation, but there are still a few use cases we haven't covered, and I realized they're actually the same use case, or the same architecture; I'll explain in a moment. Imagine you have a brand new data center. Your data center goes on fire, or someone buys you a brand new data center; I know this happens a lot. In theory, if you are the best sysadmin and you have the best DevOps team, you should be able to take your laptop into the data center and build out the entire data center just from your laptop. Does that make sense? In theory, whose data center can do this, like full disaster-recovery automation? We have some liars in the back.

To fundamentally do this, here's what you would want. I'm just going to start the second machine; my laptop is really old and slow, so I want to give it time to boot up two VMs, which is a lot of work. You typically want some laptop which initially starts off as an orchestrator to initiate some actions, and then, once there's enough stuff up and running, so the initial DNS, DHCP, Kickstart servers and so on, have those take over and build out the rest of your data center. Right, another use case for orchestrators. Are you videoing me, our photographer? Was I doing some weird little face?
The other use case: let's say you wanted to manage something remotely, and you wanted to cause some sort of semi-orchestrated operation to happen over a slow link. You're at a coffee shop and you want to push something in. So it turns out, I realized, that we could build an orchestrator into mgmt quite easily. What I'm doing over here, almost done starting up I think, is setting up two VMs, and we're actually going to do the same file-exchange operation we did before. What mgmt does is run on your laptop, and you can ask it to connect over SSH to as many machines as you want; it connects over SSH in parallel as well. But what it does differently from other tools is that it tunnels back through the SSH connections to the initial host, which runs etcd. So in addition to running over SSH, those machines can still communicate with etcd through that SSH tunnel and exchange data. Make sense, or completely lost? Scream it out; I'm here for you guys, and they don't pay me extra whether I answer more or fewer questions, so just let me know. What's needed on the remote machines? Nothing, just SSH and, like, glibc, I think that's required. Well, the kernel has to be running.
I need a kernel, but yeah, nothing else. Yep, that's a great question, and I'll show you in one sec. So what is happening: we've got the two remote machines, and we have the initiator down here at the bottom. The initiator spins up a temporary etcd cluster for that database, and the two remotes both run the client and potentially join as peers as well. But they can't connect to each other directly, we just have these SSH connections, so they actually tunnel the etcd traffic back through here as clients. There will be the back-and-forth heartbeats and so on through SSH while it's running, and so they can each run individually but still exchange data via etcd and converge as a cluster. Okay? And in particular, they can exchange runtime information. So when it runs, it initially connects over SSH, copies the mgmt binary over (it's a single binary), runs it, and then performs all the operations.

Okay, so let me show you this. Oops. I don't have DNS set up here, so I just have to pick the IP addresses; say "hate DNS." So I'll go here onto the first machine, and you can see there are just a few files in this temp directory, nothing special. Over here I have the two terminals: mgmt is on the left and the first machine is on the right; I just logged into the first one so you can see it. When I run it, it's going to do that file-exchange thing, one file on itself, one in the database, but for both machines at the same time. So how many files should I see at the end? I heard three, right: there are two machines exchanging, so each puts a file on itself, each puts a file up in the database, and each pulls down a copy of the other's file, so three files each. I'm going to run this, and let's see how long it takes. Are you ready? All right, thank you. So it starts up, it runs.
It's copying the binary over, it's done, it's running mgmt, and the whole thing is already finished. Okay, so it ran on each one: it connected over SSH, it copied the binaries, they started running, they put stuff in etcd, they pulled each other's etcd entries back, and now you have three files on each machine. Make sense?

So I just want to show you something. I'm going to watch these three files here: basically you have one file exported from each of hosts A and B, plus that initial file. Now, mgmt, as you know, is still running continuously here; we didn't ask it to shut down. So what I'm going to do now is change the configuration of the second machine. Okay, so here we're looking at the first machine, but I'm going to change the configuration of the second machine. I'm going to go over here and change this file to say "Hey DevConf 2017." Does that sound good? I'm ready to save, but I'm not going to save yet. I want you to think about what happens when I save. When I save the configuration, assuming there are no bugs in my code: mgmt, the engine, is going to notice that the configuration changed; it's going to push the configuration to both machines; they're going to notice there's new configuration and run it; in this case that exports the file up into the database; the first machine is going to see that the database changed, pull it down, and update the local copy. So this here is that first machine, and we're watching that file to see if it changes, just polling it. All right, I want you to see how fast this actually happens.
Yes? Sorry. Yeah, so I have two VMs started, A and B. I'm editing the configuration of the B machine, but I'm watching here, on this side, the A machine. So when I save the configuration for B, it'll push that to B, B will export the file and refresh it in the database, which the A machine is watching, all through SSH, and then hopefully you'll see it update as quickly as it takes. Great question: the configuration is on the local orchestrator, and everything's event-based, so when the configuration changes it will say, "oh, I'm going to give everyone the new configuration." Make sense? The orchestrator is where the code starts off, so we're not editing the configuration on the machine; we're editing it on the orchestrator. Yes, one more question. Not exactly; this is a bit subtle. Eventually there will be a language that describes the graphs on each machine, but at the moment we have to build the raw graphs manually. As a result we have two raw graphs, one for A and one for B; both graphs exist on this initiator, and I'm going to edit the B graph, which will get pushed over to B and run there. The orchestrator itself isn't doing anything; it could also do stuff on itself, but for this example it doesn't. All right, any last questions before I run this? No? So I'm just going to press enter, and you'll see how fast it changes. Done. All right.

If you think about this, what am I actually doing? I'm connected to these machines as an orchestrator, and I'm live-hacking on my config while my cluster does the work in real time. I don't really recommend you do this on a production cluster, but if you were in a dev environment, you just change your config, press save, and the whole cluster responds and takes you to that new state.
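The file counts in these demos all follow from the same exchange pattern, which is easy to model. A toy simulation (plain Python dictionaries standing in for the etcd key space; this is not mgmt's actual code): each host keeps one local file, exports one file to the shared store, and pulls down a copy of every export, its own included.

```python
# Toy model of mgmt's file-exchange pattern: with n participating
# hosts, each host ends up with 1 local file plus n exported copies,
# i.e. n + 1 files, matching the 2-host -> 3 files, 3-host -> 4
# files counts from the demo.

def visible_files(host, hosts):
    """Set of file names a given host ends up with after the exchange."""
    local = {f"{host}.local"}                    # the file it writes for itself
    exported = {f"{h}.exported" for h in hosts}  # one pulled copy per member
    return local | exported

for n in (2, 3, 4):
    hosts = [f"h{i}" for i in range(1, n + 1)]
    print(f"{n} hosts: {len(visible_files('h1', hosts))} files each")
```

Adding a host to the cluster simply grows every host's set by one, which is why a new file "appears" on the existing machines within a second of a new member joining.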
Hopefully. All right. You like that? Any questions? Sorry. Yes: everything happens as normal; it's normal config management with events, so everything that needs to be performed will happen. Let me stop you right there. The goal, probably what you want to do, is run this initially to bootstrap your cluster, and then, once it's bootstrapped and mgmt takes over, you disconnect and let it continue off with a life of its own; if not, you can just shut it down, and so on. Currently it's just over SSH. So, just to show you, I'm going to close this here; I can actually shut down mgmt, and everything stops running. This is on my orchestrator box on the left. But just to show you also: we can even start up, run the orchestrator, and converge together as a cluster. I'll go here and add this converged-timeout option, equals five seconds, and start up. It goes through, copies the binaries over, checks that everything is in the correct state; five seconds go by on the whole cluster with nothing happening, and it shuts down. So, nine seconds later, the whole thing has converged as a cluster. Okay, so this is actually not what I expect to be the primary use of mgmt, but the built-in orchestrator concept comes from the same architecture, because your fundamental architecture is a distributed system. Make sense? Yes? No? Okay, I'm going to go on. Cool. Have you ever done this to two machines? Is that what's going to happen, do you know? It seems to be working; play with that at home, but don't do it on your local machine. So we can get rid of that.

So I showed you that. One other little thing. This was basically the topology I showed you: you have this initiator, it runs etcd, and it goes out to a bunch of machines. We actually don't have all of the code in place for the next part, but if someone wants to finish it up, I think it's fairly straightforward.
We could actually have something called hierarchical execution: the first machine connects to, say, 10 or 20 different machines, and from there each of those participates and connects to a further 10 more each, and the whole thing converges together as a cluster. This would maybe be useful for very, very rapid bootstrapping of entire data centers, or something like that. The other thing it would be useful for: say you really like this orchestrator topology and you're in a coffee shop, running on your laptop over a slow internet link to your data center. As your remote execution runs, you traverse that first slow link to the data center once, and from there it fans out to your 10 machines or whatever, so the fast links inside the data center do most of the work for you. Well, no, but you could do it on the second hop, for example, or you could tunnel all the way through; all these topologies are fairly flexible. This doesn't exist in git master just yet.

So, it is a very fair question, absolutely, and people always ask this. With Puppet you could always Ctrl-C it, because it took forever to run, so it was "safer," right? And I think this is nonsense, right?
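The appeal of that hierarchical fan-out is just exponential growth. A quick sketch of the arithmetic (the fan-out of 10 is the example from the talk, not a fixed design parameter):

```python
# Hosts reachable by hierarchical execution: an initiator reaching f
# hosts per hop covers f + f^2 + ... + f^d hosts within d hops, so
# the slow first link is traversed once and the in-datacenter links
# do the rest.

def reachable(fanout, depth):
    """Total hosts reachable within `depth` hops at a given fan-out."""
    return sum(fanout ** i for i in range(1, depth + 1))

for d in (1, 2, 3):
    print(f"fan-out 10, depth {d}: {reachable(10, d)} hosts")
```

Two hops at fan-out 10 already covers 110 hosts, which is why this would matter for bootstrapping whole data centers quickly.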
Yeah, yeah. So it's a legitimate point, and I'm not trying to undermine it; your point is correct, but here's how you solve it. There are a few things. First, we have a noop mode: if you want to run without actually performing the operations, noop can be set per resource, or globally for the whole thing, so you'll see the consequences. You can't deterministically see all of the consequences, but safety is a really important thing, so the bigger thing we're doing for safety is in the way you describe the graphs. The system that gets run on your machine is currently not in its final form in git master yet: we're building a language, a DSL, to specify this, and the language is going to be an extremely safe language. There's no guarantee that you won't break things, but it will hopefully reduce the chance that you break something: off-by-one errors, variable redeclaration, things like that will just be compile-time errors. Yeah, so the fact that I'm using this YAML description is just literally the raw data structure of the graph, because we haven't built the language yet. This tool is new and we're just not finished, quite honestly, so I'm only showing you what we have so far. Hopefully in a year we'll have the language built, and next year, if I come back, I can show you the actual language; it just doesn't exist yet. And on the language note, we're really looking for people to help with this. If you can help with the code, it would really, really, really be appreciated.

There's a design for auth and how that all works; I'm not going to talk about it today, primarily because only about 10% of it is implemented yet.
I have a really good design, though, and I'm happy to talk about it; talk to me after, or at the end when we have some discussion. So, I have a few more demos and things I can show you, unless you're fed up of hearing me, in which case you can leave. You want to see more? Don't leave, let's see more. We need some consensus; remember, Raft algorithms. Yes. Okay, so I'll show you a few other things that are part of the tool, just to show you the sort of stuff we're building.

Everyone who's written some Puppet remembers the package-file-service pattern, right? The package has to be installed first, then you set up the config file, and then you start the service; so you always have to add the dependencies, package before service. We built something in mgmt which hopefully makes your lives a little easier. I have this automatic edges API, and here I'm going to do the following: I have a DRBD package, a config file at /etc/drbd.conf, a DRBD config directory, and the DRBD service. It doesn't really matter what this thing does; it's just a random example. I'm going to run it, just to show you what it does. I'm actually running this in noop mode, so it won't actually install these things; I don't want DRBD on my machine. What I do want to show you is, if you look right here, you can see the compiler added some auto edges.
So, these dependencies: it basically looked at the graph and said the package must happen before the service, which is right, you want the package installed before you start the service; and also that the package has to be installed before we set up these two config files. And it determined this completely automatically for you. So basically, what we're trying to do is add things that take work away from you: you don't have to write those extra dependencies, and by adding more constraints automatically, it makes it safer when you run your graphs. Okay, so I didn't even ask the question yet; I like your enthusiasm, thank you. The question is, how did we get this information? Anybody? Yes, there is an edge missing, from file to service; we'll talk about that. Great comment, that was going to be my follow-up question, but you beat me to it. So, the first question first: how did we get this information automatically? Yes? No, I did not define it in the configuration; I'll show you you're wrong. What I did: I did not specify any relationships between the package, the file, and the service. The only thing I did was say, please enable the auto-edge stuff; I could equally say I don't want automatic edges, please turn that off. So I asked it to try to find automatic edges, but I did not describe what those edges were. How did we get this information automatically?
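While you think about that, the mechanism is simple enough to sketch. A hypothetical illustration (mgmt's real AutoEdges API is Go, and every name below is invented; the file lists here are hard-coded stand-ins for what would come from the package metadata): a package "owns" a set of paths, and any file or service resource whose path matches an owned path gets an automatic edge from that package.

```python
# Sketch of automatic-edge inference: match each resource's path
# against the set of paths each package would install, and emit a
# package -> resource dependency edge for every match.

def auto_edges(resources, package_files):
    """Return (package, resource) edges inferred from owned paths."""
    edges = []
    for pkg, owned in package_files.items():
        for kind, path in resources:
            if path in owned:
                edges.append((f"pkg:{pkg}", f"{kind}:{path}"))
    return edges

owned = {"drbd": {"/etc/drbd.conf", "/etc/drbd.d",
                  "/usr/lib/systemd/system/drbd.service"}}
resources = [("file", "/etc/drbd.conf"),
             ("file", "/etc/drbd.d"),
             ("svc", "/usr/lib/systemd/system/drbd.service")]
for src, dst in auto_edges(resources, owned):
    print(src, "->", dst)
```

The point is that no edge appears anywhere in the user's configuration; it falls out of matching resource paths against package contents.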
From the package metadata, yeah. I don't know why systems weren't doing this before. What we actually do is: RPM has the information about which files a package would install. So if it sees a .service file, it knows that if a service corresponding to that .service file exists, the package must be installed before the service; similarly for config files. Now, someone quite rightly pointed out that there's no link between the service and the config, and that's currently just not possible. So I actually have a proposal for the systemd project: if the systemd service file could carry a list of which config files go with that service, that would be useful for a few reasons. One, we could query it and add those automatic edges for you. Additionally, when you run systemctl status on a service name, you could see whether the service has been restarted since the mtime of its config files changed; systemd could tell you, "hey, you might consider reloading your service."
It might be out of date. And this actually exists as a proposal in systemd. Yeah, I understand what you're saying, or I think I do, but the greater systems-graph information is something that's config-management specific; I'm not recommending we put that into systemd. I'm asking systemd to tell me one very simple thing in the service file: these are the config files that relate to your service. It's true that a web of things that might need restarting in a certain order would have to be something more complex that you model in mgmt, perhaps; but just "is this particular service out of date with respect to its config files" by itself gives us quite a lot of information, which benefits both mgmt and existing tools that want to know if that's the case. Yes? Indeed: mgmt will add edges if you want, and you can disable that per resource if you don't want them to participate; in addition, you can add additional edges yourself if you want. Additional rules for automatic edges? Absolutely, yes, of course. You can definitely specify different rules and ways to generate the automatic edges. There is what we call the AutoEdges API, and each resource has code, in the native resource, that decides how to figure out which edges are supported. In this particular case we generate the edges you've seen, but each resource can have its own way of generating different kinds of edges. And in fact, if you know of a way to add edges between two resource types that I haven't thought of, please send us a patch, or at least the idea, because we'd definitely like to support it. Yes? So, edges restrict parallelism, right? Ideally, if we had a beautiful system, everything would run in parallel; but that's not the case in practice.
Everything that doesn't have limitations and restrictions will keep running in parallel; it's up to you how you want your graph to look. So anyway, if there are any systemd people who want to help write this patch: I talked to Lennart about it, and he likes the idea a lot; he has some further ideas. I think that's what he said; he was like, "James, this is the best idea ever." He also told me to send the patch, and I was like, you don't want my C code in systemd. But the point is, you guys are better C hackers than me, and if you do want to help with this, it would be really helpful.

This is something we won't skip, and something you're probably curious about, Mr. Stephen. So here's a graph; it's just an arbitrary graph, and I want to show you something about it. We have three blue resources, which are packages; two red resources, which are files; a service resource in green; and we have Christoff in the back taking photos. Yeah, that's fine, send me a copy. Okay, cool; back to this graph. It's the prettiest drawing I could make, sorry. What is the problem with running this particular graph? Besides Steven: is there a problem with this particular graph? All right, anybody, including Steven? That's okay. It's true that we have two disconnected portions, but the engine will reach all these things; it'll probably start running, say, here on this service, and this one here, and then run this and this, and this one will start also. But there is a problem. It's not a major problem.
It's just something that could be better with this graph. The package-manager lock is going to keep us from running part of it in parallel, because we fundamentally cannot install powertop at the same time as cowsay. So here's how we actually solve this. We solve it generally, for all resources, but it's most effective for package resources. When you create this graph, it goes into the mgmt engine; the engine does an analysis of the graph, and it transforms it into this graph. They're fundamentally the same; the difference is that it looked for certain resources it could collect together and grouped them into a single operation. If you were to run this graph in, say, Puppet, it would go through and do yum install first thing, finish; yum install second thing, finish; yum install third thing. But because we have grouping enabled, it groups these three into a single operation, which is much, much faster. I see some faces here that don't like this idea. Okay, so just to be clear, maybe I didn't make this entirely clear: it is not required that everything must be grouped. By default it will try to group things that make sense to be grouped together, but you can disable it per resource. If you had something like package A with a dependency on package B, it will not group those two, because there's an explicit dependency; it respects the dependencies. Additionally, if you had package A, package B, and some other resource, and you said "do not group these," it won't. So it's an optional feature, but we think it's helpful in a lot of scenarios. And you have a question again? Well, how about I show you this first; demo first, demo first. Wait, wait, let me show you a demo first; demos are more fun. So we'll go over here.
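As an aside before the demo, the grouping idea itself can be sketched in a few lines. This is a deliberately naive illustration (mgmt's real algorithm is pluggable Go code operating on the full resource graph): package resources with no explicit edge between them collapse into one batch, so several separate installs become a single package-manager transaction.

```python
# Naive resource grouping: packages that have no dependency edge
# between one another are safe to install in a single transaction;
# an explicit edge (A must come before B) keeps them apart.

def group_packages(packages, edges):
    """Return the packages that can be batched into one operation."""
    related = set(edges) | {(b, a) for a, b in edges}
    return [p for p in packages
            if all((p, q) not in related for q in packages if q != p)]

pkgs = ["cowsay", "powertop", "sl"]
print(group_packages(pkgs, []))                        # all three batch together
print(group_packages(pkgs, [("cowsay", "powertop")]))  # the edge splits them out
```

The respect-the-edges rule is the important part: grouping never reorders anything the user explicitly constrained, it only merges operations that were independent anyway.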
Okay, cowsay; okay, cowsay is working. So I said we're going to install those three packages; let's just run it. Okay, and you can see, it doesn't show up really beautifully, but it actually noticed the three separate packages and installed them all together. But just to return to the question from the beginning, where the gentleman asked what happens if you remove a package: I'll just type my password, which is "password," and remove cowsay, and you'll see on the left... see, cowsay is not installed; it's noticing that it's not installed and installing it. Hey, oh, not yet. Is the internet working? When the internet finishes working, this will get installed. Yeah, so the internet is not working, but when the internet works, this works. Mmm. At the moment there's no event for that, but patches are definitely welcome. What actually happens is the resource is seen as failed, and there are mechanisms to retry and do things like that, which you can specify. So, no: by default, when you have a failure it just finishes, it just exits, so you can detect problems; but there's retry stuff and metaparameters for that; more on that later. So we'll... it's retrying. Yeah, the internet is not working; I'm probably behind this stupid firewall thing. Is it working? If it works now... Yeah, so that's the basic idea: modulo internet difficulties, if the package gets removed it gets re-added, and the grouping happens automatically as well. There we go, it came back. Quick little bonus: you can even cowsay a cowsay, in case you ever want cowsays in your cowsays; you can nest these. Sorry, so there were a few questions, and you had a big comment, an observation. Yes, much faster. Yeah: adding that serialization, you're assuming that will be faster, but it will not always be the case.
I'm wondering: is it the right thing to do this grouping in advance, as a fixed part here? Or is it better to say, "I've identified a convergence of the same module, and therefore I will batch it, but I will still batch the next iteration of the same module, the separate graph entry after that," and try to optimize?

Absolutely; you're exactly correct, this definitely reduces parallelism. But it's important to remember a few things. First of all, the status quo of all tools currently is zero parallelism, so this is always better than the status quo, particularly because the status quo doesn't even group packages; you'd have serialized single-package operations, which is the worst-case scenario. Second of all, you can definitely disable this, again per resource. So if you don't want a really big resource to be grouped with everything else, say you have a really big operation that depends on a package which takes a long time, you might write your code, or have an algorithm that optimizes for this, so that it installs that one first and then, as a dependency, installs the second package grouped with everything else, so you can branch off very early and get that parallelism. And lastly, because I knew someone would bug me about this when I was coding it: the algorithm used to group the graphs is actually completely pluggable. I wrote a very naive implementation that tries to group as much as possible, but it's a completely pluggable part of the code. So if someone implements a different grouping algorithm, and you're probably smart enough to do so and I'm not, it could be just a flag that says "use the cheap grouping algorithm" or the other one. So, no: this is not hard-coded at all; the grouping actually happens at runtime.
So it's a live process that can During the execution of the graph Yeah, I Think that It's probably interesting what you're talking about But I think it would be a lot of work for very very little benefit. So in practice. Yeah in practice I don't think it will help you a lot I think The just the major speed up just from the simple optimization is what most people just want and just to actually show you some Some data on this. I actually did some data Took some data from existing systems. So the red bars are puppet the blue bars the tinier ones are MGMT Yellow is actually package kit running natively and green is DNF itself Just running and this is execution time. So at the left I have a single package runs and the right I have multiple packages all grouped together and the last bars are Three runs three packages, excuse me. So by the time we're installing three packages Puppet tricks up so much overhead just starting up and shutting down that it takes two or three times longer Then just MGMT which does the grouping so you already get a pretty good speed up Just with simple grouping and I think that's probably pretty good for most situations All of the pieces after that can be extremely fast Maybe that means it's irrelevant You can only take it so far I'm sorry So just let you know so this is for packages, but the grouping algorithms Applied to any resource type that supports grouping. So for example, if you have a resource that makes a connection to some host And you could group those connections in the API that sort of thing would be helpful as well File resources don't support this yet, but you can actually group file resources to save I notify watches and other sorts of things as well Nope, you are out of your quota you lost your quote Ask me after Against the unremove. What is the unremove? 
Yeah, I showed you this, actually. PackageKit does this: PackageKit has a watch on the RPM DB, so if yum does an operation and the RPM DB changes, PackageKit gets a signal, and PackageKit notifies me. Yep, it actually does work. There was one bug where they were missing one signal, but it's been reported; I think it's patched by now, and if not, it's definitely listed in our tracker. Yep. Thought of everything? Not everything, not everything; the biggest thing we need is people helping.

So I'm going to skip this demo for now; I want to show you another demo. Do you want to see another demo, or are you fed up? Okay. So I told you that I have a virtual machine resource. What I'm going to do is run mgmt on the left, and you're going to see virsh on the right. I just start this up, and you can see, basically, I've declared the state of a VM to be up and running: so please start up and run this VM. The same sort of thing applies as before with the package: virsh destroy mgmt3, and you can list it, and it comes right back up. So if I want a VM running, I say "please run," and it boots it up; I can destroy it, and it'll come right back up; I can even undefine it, change properties, and the same sort of thing happens. So this is one of the resources that's a bit more powerful. When you start thinking in mgmt, because we have this event system, we can have higher-level resources that are more powerful. There are some questions? Ask, sir. No, it's okay, tell us. No? The gentleman in the back? I've provoked discussion.

So what we can actually do: I have this Cockpit branch. At some point I wanted to think about what the higher-level abstraction looks like for the end users, right?
So imagine you have all these modules now that can do different things on your system, and you have your whole data center all beautifully configured. What you could then do is create what I would call a meta module that groups together different first-stage modules, and that module itself would have a higher-level interface that describes, say, your architecture, your cluster. Some people might want to expose that graphically, with a number of parameters, or it could live in git; but it could be graphical too if you want. The sort of thing you might see there is a dial for the number of copies of data in your storage cluster, or some load scaling factor. So if Michael Jackson is about to die, you would slide this up and ramp up just before he dies. Not that you would kill Michael Jackson, but those sorts of things. To expose this graphically, there's a great project called Cockpit that a bunch of Red Hat people work on, and I did a little proof of concept of how this would work. I'm just going to show you a little demo. So I wrote a little module, and it's right here. I have Cockpit running, and I'm just going to run MGMT over here, and clean up this old mess. So MGMT is going to run, and over here you won't see MGMT running, but I am going to show you virsh, which is just a little thing that lists the virtual machines running on the machine. So that's running, and the cool thing is I have this slider here. I set this to, say, four, and I click save, and you'll see that it will boot up to four machines. I move it down to two, click save, and it brings it down to two. Bring it down to one, click save, and so on; bring it back up to, say, three, click save.
So whatever happens, Cockpit will actually send events to MGMT, and it can detect the state as well, which gives you a sort of responsive interface to what's happening. Now, here's what's actually happening: every time I click save, Cockpit generates a new graph, or pushes new data that goes into a graph, running on MGMT. MGMT has the current graph running and this new graph; it does a diff of the two graphs and sees what has changed, according to the dependencies, so that it knows only what it needs to update. It doesn't have to recheck the whole graph every time; it then switches the graphs very, very quickly, and does the work to set that new graph up. This happens extremely efficiently. It happens so efficiently, and this is kind of like multiple Puppet runs if you want to think of it that way, that you can actually do this multiple times per second. So I have a little live checkbox here; I can click it, and you'll see that now, as I just slide the slider, MGMT is constantly running, noticing these changes, reacting in real time, and setting up whatever you want. And we have more questions, yes. Oh, so this is because I am shitty at JavaScript programming: the number is just the percentage, in decimal, of how far the bar is across, and the integer at the bottom is basically, I forget if I used ceil or something like that, just a way to pick an integer across the range. So if someone could make a control that would go tick, tick, tick and have little integer marks, that would be really dope, but in the meantime I just have this. So if there's anybody from the Cockpit project here, you can make this better. Yes? So, great question: at the moment it's mostly one way, but we can make it fully two way if we want. It's not implemented.
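The graph diff described above is the key step: only the delta between the running graph and the newly pushed graph gets acted on. A minimal sketch of that diff over vertex sets (mgmt's real diff also compares edges and resource parameters; `diff` here is an invented helper):

```go
package main

import "fmt"

// diff compares the vertex sets of the currently running graph and a
// newly received graph, returning what must be added and what must be
// removed. Untouched vertices need no rechecking at all.
func diff(old, next map[string]bool) (add, remove []string) {
	for v := range next {
		if !old[v] {
			add = append(add, v)
		}
	}
	for v := range old {
		if !next[v] {
			remove = append(remove, v)
		}
	}
	return
}

func main() {
	running := map[string]bool{"vm1": true, "vm2": true}
	// Slider moved from 2 to 3: Cockpit pushes a graph with one more VM.
	pushed := map[string]bool{"vm1": true, "vm2": true, "vm3": true}
	add, remove := diff(running, pushed)
	fmt.Println("add:", add, "remove:", remove)
}
```

Because the work is proportional to the delta, not the graph size, sliding the slider several times a second stays cheap.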
I'm not sure if we want this, but it requires more JavaScript code, and I suck at writing JavaScript. Yeah, and that's one of the reasons why we chose Cockpit. And that's the thing: some people in this config management and automation DevOps space have insisted, adamantly, that what I'm doing is not config management, that it's what they want to call choreography, right, where you have all these dancers that independently all work together. But I personally see it the other way: I see traditional config management as having been a little bit broken and not properly designed, so I think I'm finally trying to build what config management should have been. Whether you call it choreography or config management is really up to you; config management is my preference because I built the thing, but you can talk about it however you want, as long as we agree on the concepts. Any more questions from the back? Great question, I'm going to get to that in about two minutes.
So hold it for one second; any other quick questions on this? You like this? Anyway, that's the idea. This is just a very simple thing with just one slider and one parameter we're changing, but you can imagine a whole bunch of different things exposed this way. Yeah, or this could just be in git: you just edit things, save them to git, and then refresh. Yeah. Does libvirt support that? Yeah, it's using libvirt, and libvirt has a whole pile of events and so on. What I don't have ready, but am building into the virt resource, is a parameter for the number of CPUs and the amount of RAM. So in a future demo, I will slide up the number of CPUs on the VMs and change the amount of RAM, and libvirt can actually hot plug more CPUs and hot plug more RAM. That will be the cooler demo, but it's not ready yet. So if you work on libvirt and you want to help me finish this code, please let me know. It's a bit tricky, but it's going to work, and the really cool thing is that the code, the logic, is per resource. You don't have to have a complicated thing that knows about much more than just the resource-specific stuff, and the common code in the MGMT engine ties things together and does all the magic work. Let's see what else we've got. There are some other additional demos, but it's been quite a long time, so I'm just going to give a quick summary and finish up, and if people are really fed up you can leave. And if you really, really desperately want to see a few more things that are more advanced topics, then I'll show a bit more for the people over here.
So again, there are still a lot of things that are not finished yet. We don't have the DSL language yet; we have a design, or most of a design, for a language, but I'm just really shit at computer science things like lexers and parsers, and I'm really the only one working on this full time. Budgets are hard, and getting people involved with something that's not ready for production yet is really, really hard. So if you can help with the language, we would really appreciate your help. We need more resources. We have a few basic resources, but we're not finished at all, and some of the resources we have don't have many options yet. We have a file resource, but it doesn't set the mode on files yet; that's a really simple one-hour patch if someone wants to contribute it. I think I told you about that. We have a container resource based on nspawn; it's actually kind of broken, but it's the same idea. We can finish building these out, and so on. There are some etcd things we need to work on, and some other stuff that I'm not going to talk about today. I showed you the virt stuff, I showed you Cockpit; one other quick thing
I want to mention: the whole project is written in Golang, and one of the cool things about MGMT is that it's actually implemented as a library, with a very small CLI parser in front of it. So if you wanted to use this in an existing piece of software, you can actually import it as a library. When you look at software, whether it's GlusterD or FreeIPA or Pulp or all these different systems, they often come with a management interface, a somethingd, excuse me, and what it typically does is deal with cluster membership and all of this coordination of the different peers in your cluster, doing all the work that they do. But instead of doing that and reinventing it, often wrongly, each time you build new software, you could instead use our library, which does all this for you automatically, and the user wouldn't even know they're running MGMT; they'll just have this robust graph engine that does all these operations for you. So that's coming. I have some demos, but I'm not going to show them right now. This I won't show either, but it's pretty cool; I'd like to talk about it if anyone stays around, and these things are all blog posts, so you can check those out. The language is coming up, and we have some cool graph finite state machine stuff that's not finished yet but is getting pretty close, so if you want to talk about it, let me know. I've actually recently switched teams at Red Hat; I've been part of the storage engineering team for about a month or so. The goal is basically to apply this tech to GlusterD, and then potentially other things one day, because we really want to improve the management story for these projects. So yeah, this is what's coming. If you like this idea and you want to help out, please let me know and we can tell you what we're working on. So again, this is about you, right? I'm really here to help you. Get involved; if you don't help and get involved, it'll just make me sad.
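The library-embedding idea just described, a project daemon reusing the graph engine instead of reinventing convergence, can be sketched with Go's struct embedding. Everything here is invented for illustration; the real mgmt library's package layout and API are different:

```go
package main

import "fmt"

// Engine is a stand-in for the mgmt library's graph engine; the real
// package exposes far more (events, cluster membership, etc.).
type Engine struct{ graph []string }

// Load adds declared resources to the engine's graph.
func (e *Engine) Load(resources ...string) { e.graph = append(e.graph, resources...) }

// Run pretends to converge the loaded graph and reports on it.
func (e *Engine) Run() string {
	return fmt.Sprintf("converging %d resources", len(e.graph))
}

// A project daemon ("somethingd") embeds the engine rather than
// writing its own cluster coordination and convergence logic.
type StorageDaemon struct{ Engine }

func main() {
	d := &StorageDaemon{}
	d.Load("volume:data", "brick:/export/b1")
	fmt.Println(d.Run())
}
```

The design point is that the daemon author only writes the resource definitions specific to their project; membership, event handling, and graph execution come for free from the embedded engine.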
So how can you help? All sorts of things: you can use it, test it, patch it. Oh my god, the testing is so bad in MGMT right now; I've just had no time. Share it with your friends, document it, star it on GitHub. If you have a blog, blog about it; if you have Twitter, tweet about it; discuss it with your friends; and just hack on it, hack stuff. I have one marketing slide, because Red Hat was kind enough to pay to send me here. Well, they haven't paid yet, but hopefully they will. So if you want stuff, give them money, because then they keep paying me, which is nice. And just to make it clear: this is an upstream community project, it's not a product. So you can't buy it yet, but if it becomes more robust, maybe there'll be a product offering one day, I don't know; time will tell. So let's just recap. The video is kind of buggy, got to fix that, but it's Arthur Benjamin recapping his pen; I don't know if you've seen that talk, it doesn't matter. So here are a few links. The technical blog of James: you all know about it now, so put it in your RSS reader. The project page is on GitHub at purpleidea/mgmt. There are about six blog posts.
I've written posts on my blog that talk about some of the things I covered today in more detail; not everything was covered, but most of it was. I'm purpleidea on IRC on Freenode, and you can email me if you really want. So that's most of the presentation. One thing: do me one quick favor. If you liked this presentation, there is a feedback form on the website; you have to click on the workshops, then day two, and you will find my talk and can send feedback. Please do this, it would help a lot. Even if it's "this was shit, fuck this guy" (can I say that? sorry), whatever, just give some feedback, and it will tell the organizers whether you thought I did a good job or a bad job or you loved it or whatever. I think you need a Google account this year to give feedback, which is really lame. So yeah, that's lame, but do it; give them your Google info. And if you want to hang out and talk about this stuff, we have three ways to get involved. We have the #mgmtconfig IRC channel on Freenode; come hang out, come work with us. Sometimes it's quiet and sometimes we have really cool conversations like the one we're having, so I find it interesting. We have Twitter for some reason now, and we actually have a new mailing list. I tried everywhere to find hosting, and Red Hat seems to have the best free mailing list hosting that's not proprietary nonsense. So please subscribe; it's brand new and really, really low volume, but I'll start putting announcements and discussions on there, so if you want emails, there you go. Any more questions? That's it. Yes? You have a lot of questions, so let's... I think so too. Gee, thanks, I gave you like ten live demos and it all seems to work. Great, stick around, we'll have a more informal chat; any other quick questions before we talk more informally?
Yes, Peter, great question. Just to repeat the question: do you have to have additional resources compiled into the Go binary, or can they be separate? At the moment we only support native resources compiled into the binary. Go 1.8 actually supports dlopen-style module loading, so I expect that in the future you'll actually be able to have a standalone resource that you compile and that gets pulled in via, for example, a .d directory. That doesn't currently exist; if that's code you want to work on, it's probably fairly straightforward, and it will hopefully be supported in the future. Any other quick questions? I'm going to hang around if people want to talk a bit informally, and I have a few more demos of higher-level stuff if you really want to get involved. Other than that, thank you so much. Yeah. Where's my applause? This is like a clap slide. I just don't want to keep anyone hostage who doesn't want to stay, but if you do want a sticker and you're going to use it, please come get a sticker; or if you're going to stick around, you can stick around, but if you're on the way out, come get a sticker. Yes, questions; let's start you off, you first. Right, so, great question: at the moment the YAML file is the raw data structure of the graph that runs on each machine. When the language is finished, the way it will work is that you will write code in this language, and the language will compile and produce a potentially different graph per machine. It could be that you get an identical graph, but you might have, say, a hostname variable that changes things a little bit per machine, or different roles, and so on. So you'll have one code base that, when run on specific machines, produces different graphs, based on your code. You would have, for example, a variable that does that for different roles.
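The one-code-base, per-machine-graph compilation just described can be sketched like this. The `compile` function and its role logic are invented for illustration; the real language and compiler don't exist yet, as the speaker says:

```go
package main

import "fmt"

// compile models the planned language step: one shared code base plus
// a per-machine hostname fact yields a (possibly different) graph per
// host. Identical hosts simply get identical graphs.
func compile(hostname string) []string {
	g := []string{"pkg:base-tools"} // common to every machine
	if hostname == "web1" {
		// role-specific branch, like a $hostname variable in the code
		g = append(g, "svc:nginx")
	}
	return g
}

func main() {
	fmt.Println("web1:", compile("web1"))
	fmt.Println("db1:", compile("db1"))
}
```

The artifact the engine actually runs is always just the resulting graph; the language is the layer that produces it per machine.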
How you code it is up to you. You're really waving your hands a lot. So no, we're not doing YAML; the YAML thing is just the programmer's raw data structure, and it's going to disappear in the future. Yeah. What will happen is you'll write code, which will actually be kind of similar to Puppet code in a lot of ways, though not entirely, but that's the main idea, which will run for specific machines, get compiled, and produce output: basically a raw graph that is specific to them. So the artifact that the engine runs is a graph. Any other questions or discussion? Oh, you send me patches and I give you stickers; maybe that's a trade, maybe not. Wait, stop, stop, stop, wait, you can continue in a second: this is not a competitor to Ansible, let's make that very clear. Okay. To extend easily... I know exactly where you're going: you like having a portable Samba module, right, one that works across distros. It is truly a great question; it took me longer than an hour of me talking to realize this, so props to that. So here's the fundamental problem in all the existing tools, Puppet, Chef, Ansible, CFEngine, et cetera. There's always this problem where you're like, well, is this the good module or is that the good module, or which is the blessed module? But I'm missing this feature, or it can't do this, or it doesn't work the way I want, and it's just really shitty, right? Like, we don't have, well, I guess we do have ten glibcs, but you should generally be able to write code that works for most people, right? We should be picking between web server A and web server B, not between different modules for web server A. And here's the fundamental problem, I have an article on my blog about this: I think the problem was that the resource primitives built into, say, Puppet or Chef and so on were the wrong kind of powerful. They didn't have the power and the event system which was needed to build the right kind of resources.
So if you have our event system and our resource API, and the language, which is going to have some important features relating to this, I truly believe we'll actually be able to come together and build a definitive module per thing that will actually solve the problem, and not only solve the problem, but so well that it would make sense for that module to even ship with the upstream project. So instead of having module authors sit outside of, you know, the Apache project or the nginx project or whatever, we instead have that as part of the upstream, and eventually, over time, people might not have these sort of random /etc data file formats, but instead have MGMT as maybe even a de facto way to configure their software. You could give people a structured approach to configuring their software instead of having to constantly wrap things in weird /etc file formats. Now, I don't expect to become a de facto standard here, but I think moving in that direction is much more possible once we have powerful resources, the right kind of safe, powerful resources. A macro? Okay, a macro: if my service specification isn't as portable, I can target a config file and reuse the config file resource, and my overhead is lower than if I have to make a new Samba resource from scratch; that's a much higher barrier. So I'm wondering, yeah, whether one of the possibilities is to marry the flexibility by combining existing resources. Yeah, I completely agree; that's why we're hopefully going to have a lot of primitives that make sense but that aren't too encompassing.
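The primitives-plus-composition approach being discussed, building a higher-level module out of small generic resources rather than one monolithic Samba primitive, could look like this. The `Res` type and `sambaModule` helper are illustrative stand-ins, not mgmt code:

```go
package main

import "fmt"

// Res is a minimal stand-in for an mgmt resource.
type Res struct{ Kind, Name string }

// sambaModule is a composite "module": no new low-level resource code,
// just a composition of existing primitives (pkg, file, svc).
func sambaModule(share string) []Res {
	return []Res{
		{"pkg", "samba"},
		{"file", "/etc/samba/smb.conf"},
		{"svc", "smb"},
		{"file", share}, // the exported directory
	}
}

func main() {
	fmt.Println(sambaModule("/srv/share"))
}
```

Because the module is only a composition, portability across distros lives in the primitives, and the module itself stays small.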
So yes, I probably wouldn't create a Samba primitive, but I would probably create a file primitive and other primitives like it. The other thing you can do is, in the end, create, say, a Samba module that's actually composed of those primitives. It might seem like a lot of work because it's a whole new resource, but it really is just a composition of other resources. So you'll be able to compose resources in the language, and actually, in the native Golang resources, you can already create composite resources. Yeah. The way that we're going to do multi-distro compatibility: I actually wrote an article about this during my Puppet time. I have a pretty clean design for how this works, I've done it in Puppet, and I can point you to the blog post later. I realistically plan on doing almost the same thing. Yep, so I've done it with Puppet, and it will be similar here: certain modules recognize other modules. So if the purpleidea something module, or the authoritative something module, recognizes that the something-or-other module is also present, they might be able to exchange data between them. This will be something that happens very simply. So, for example, a database that wants to be Kerberized, and that notices a Kerberos module it supports, will be able to automatically Kerberize itself if you want it to. So that's something that will be done. Yeah, so you've mentioned twice this sort of second-stage figuring-out of stuff; I think we should talk about this more offline, because I don't know that a second step is really necessary, but let's talk about that a little more offline. Any other comments or questions? People want to discuss stuff? So, the way that secrets management will work in MGMT, and the way trust and authentication will work, I've not yet described for MGMT. I have some work I did on secrets management and how I believe it should be done.
There's even a recording of it online. If you find my blog post you should be able to find it, and if not, ping me and I'll send you a link. I plan on using that design in MGMT; I think it's quite nice, but it's totally out of scope for today. Yeah. Anyone want some stickers? Stickers, yes. I'll be hanging around. I have some more demos, but I think it's probably been a lot. So, if you are coming to FOSDEM, I'm going to give a simpler version of this talk at FOSDEM, and if you're coming to Config Management Camp, I'll be giving two talks: one will be sort of a basic talk, and the second talk, at a different time, will be a really nitpicky talk about all sorts of niche things that you might not have heard of. One last quick thing before you leave: if you go to the project page, we've actually tried to make the documentation not entirely suck. It's not finished, but if you scroll down, we have this little documentation section, and I've spent a good amount of time creating a resource guide. So if you want to actually help build a primitive resource in MGMT, which is a great way to get involved and very easy, you can click on the resource guide and go through it all; it actually shows you what methods you need to implement, how to implement them, and so on. Actually, I totally forgot to mention something, and I feel bad; one other quick thing which I should have mentioned. My friend Felix did some nice work on this; I just realized I forgot to mention it. Where is it? So, one last thing: there are probably a lot of people who have existing Puppet code they want to run, and how do you migrate your code to a new thing? It sucks to rewrite all your code. So when I first presented this tool, I pointed out we could probably use existing Puppet code, and we now actually have a compiler that takes existing Puppet code and turns it into MGMT graphs. So it's not perfect.
It doesn't support everything in the world, and it's definitely not finished, but it already works really well. This guy Felix, who's not even at Red Hat, worked on it. So if you do want a way to get all your Puppet stuff over, have a look at that too. So much stuff. Thank you so much, and I'll sit out here if you have any more questions or want to talk. Sticker? Thanks. Yes, would you like a sticker, black or white? Black or white? Will you use it? Would you like a sticker? Which package format? I need some patches: there's no deb package at the moment, and there's a broken Copr; I have a broken spec file that worked initially, but then I split things in the module. You could help with that. "One model is used in reclass, and it's a tool which..." Black one or white one? I don't know what reclass is; I don't know about this tool and what it does, so I'd have to look at it more deeply. "For example, if you could use simple metadata and import it into your MGMT..." You mean, for example, supporting Puppet code, or say Ansible or Chef? That would be harder, I think, but if you want to do this, it's meant as a migration path. The real reason why we don't want that to be the primary language or way of talking to the thing is that we have some patterns we've designed which will be extremely powerful and extremely safe for describing the distributed systems that we want to build. And the really important thing about safety is understanding what's happening. So we want to have things that are very expressive, that make it very clear what is going to happen, in a special language that makes that easy to do. Puppet and Ansible just don't have a way of making that safe and easy for distributed systems; that's why we won't have those as the primary language. "So are you going to build your own language?" I hate so much that I have to, but we have very, very good reasons why we unfortunately need to. It will be a very small language. "How close are you?" We've been thinking about it since day one.
So I actually have designs and no real working code. I fucked around writing a lexer a little bit, but I'm just so bad at this stuff. "Can you help us?" "I've done it once, and I used flex and bison." Okay, help us do that; you're definitely further along in this knowledge process than us, than me. "What about templating?" Yeah, sure. The language will have functions. Pardon me? The language will have functions and facts as well. Facts will be event-based; functions will just be raw functions. No, no, no, we're not getting into fights about templating engines. "Do you use Jinja2, or will you write your own?" The nice thing about templating is that it's a function you implement, which takes variables and produces an output. So we'll probably end up providing one as the recommended one, but it's not a major decision; we'll pick one that's implemented in Golang. I don't really care so much; this is not a major worry for me. Once we have the core language and the function API built, and that stuff is done, it will be easy to just drop something in. People who want to do the work will have a bigger vote than the people who don't do the work. What else? You have more things you want to talk about? I have any number of things I could talk about. "I love the concept. Thanks for answering all the questions about how we scale it." You always ask good questions. "I deal with hierarchical setups; you need some hierarchy, you need multiple administration bases, as we mentioned. I suspect you need a distinction between provisioning and configuration; it's a really strong design pattern these days. In a cloud environment, you launch an instance with one API, and the pattern is that once it's up, it then establishes itself as a member in the configuration of the cloud network. That relates to the Samba question."
"You have to know, once you've connected, what version it is, because the configuration was somebody else's problem." Oh, yeah, yeah. The facts will have stuff like that: if you want to look at the Samba version, that will be a fact; that is not a problem. "As long as you can establish the collection of those facts so that I can still respond to them, and maybe potentially generate a new graph based on the facts I've got. That's the feedback loop I want to establish. I don't want to have to say... The collection happens at one point in the graph and then something else happens; I actually need the flexibility, based on the facts, to generate new graphs." So maybe I didn't make it entirely clear. I didn't get into this in detail because it usually goes over everyone's head; it won't go over your head. The language we're trying to build, and I'm trying to flatter him, but no, it's a tricky thing even for me: this is why we're building a language, because the kind of language we're trying to build is a declarative, functional, reactive programming language. So with the FRP, the facts will vary over time. The language basically produces two things: it produces a graph, and it produces events that say when the graph changes. We have the graph diff engine, which can deal with graph changes very efficiently, and the language itself, since the facts vary over time, will produce code that generates new graphs over time. So a fact that is empty at the start would not produce, say, a resource for the Samba portion; but once something is up and running, it sees, oh, now there's a Samba up and running, and it's this version, and that's a fact that changes the code, which produces a new graph. So I think you will get what you're looking for. Does that answer your question? It will be a reactive language.
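The reactive behavior just described, a fact stream driving re-evaluation of the program into fresh graphs, can be illustrated with a toy loop. The `program` function and the fact values are invented for illustration; the real language's FRP semantics are only designed, not implemented:

```go
package main

import "fmt"

// program models the planned reactive language: each re-evaluation
// against the current facts emits a new graph. An empty fact produces
// no Samba resource; once the fact appears, the graph grows.
func program(sambaVersion string) []string {
	g := []string{"pkg:base"}
	if sambaVersion != "" {
		g = append(g, "samba:"+sambaVersion)
	}
	return g
}

func main() {
	// Fact stream over time: Samba appears (with a version) later on.
	facts := []string{"", "", "4.5"}
	for _, f := range facts {
		// In the real system the engine would diff each new graph
		// against the running one and switch efficiently.
		fmt.Println(program(f))
	}
}
```

This is the feedback loop the questioner asked for: facts vary over time, each change re-runs the program, and the diff engine applies only what changed.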
"This comes close to a couple of old languages in HPC, for example, from when we were trying to parallelize things in the 80s or so." The 80s? You're not that old. Yeah, if you'll send me a patch, I'll come. "In the 90s there were a few languages coming up around data flow, and the concept was precisely improved parallelism for algorithms: as you work out your data flow, with the objective of mapping onto massively parallel instances. The early days of MapReduce are all around that basic data flow concept. So you want to see the design of the language, so that if you're missing something, we're involved. I'm interested, but at the minute I'm buried in other pieces. You may have seen Varlink; the talk on it was yesterday. Because the expressivity itself is one thing, but one of the things we really need is stability. Any number of APIs configure the system today: how can you actually define which ones are portable from one release to the next? How can you define, for all these D-Bus APIs, which ones are stable? There's no schema. So the ongoing variety is good, but there's also a need to start being opinionated and say, well, you actually want to be able to stably manage your entire ecosystem across entirely different systems. How do we even describe which parts of this are stable, whether it's D-Bus or Ansible or whatever else? Those are the kinds of questions we're already looking at." I can't tell for sure, but I'm optimistic that the plan for the language will solve a lot of people's problems; I can't prove that until I show you, and that's why I'm trying to show you that part. I mean, can you give us five hours a week? No? I hear four, four, four. Anybody, four? That's really what it comes down to. "My family would not appreciate it."
The problem is that there are so many people who like the ideas, think it's great, and want it, but when it comes down to helping to build it, it's much harder to find people; and I mean resourcing at Red Hat too. "That was actually one of the reasons I was talking about the Galaxy thing: if I've got a framework in which I can easily hack up something useful, then somebody else can improve it over time. The complexity of the initial definition of an adequate or sufficient resource can be a barrier to entry; a robust model and being insanely hackable aren't in conflict, but there's certainly a tension between them." This is why the language has to come before the modules; that's the big thing. I'd like to talk about modules and how we'll only need one module per thing. I'm confident that can happen, but we just need the language first, and then, assuming my designs work out, and I'm confident they will, people will write a single module per thing. So in practice the end user will probably never, or very rarely, write low-level MGMT code ever again; they will just need to create a meta module that groups things together to describe their specific architecture. "I think the good thing here is that it's very feasible to target a set of resources that you can code up front for an inherently clustered service, and immediately give people expressibility for that, and then you can draw people in, starting with a specific application like that." Well, we need a sponsor, and they seem to be interested, but it seems to make sense. Help us, help us; that's our only hope. Do you have some more questions?
She's looking around and looking at the rest of the couples, I think that's nice. It's really nice. Good, thank you. And one more thing, you know, it's a java, it's a java, You know, if you're around and you want to do it, you can do it, you can do it, you can do it, you can do it. and you should try to get this. So this is the guy who gave you the wrong person. So you're going to do it, you can actually do it. So you're going to do it. You're going to do it. I don't know. Just starting from the beginning. Sorry. Are you giving out our deals? Sorry? Are you giving out our deals? Yes. Do you have schittum? Do you have schittum? No. Hahaha. Do you know why I didn't read the instructions on the page in the real world? Because I wanted to do something in English. I don't know what to do. Do you speak English? I don't know. I don't speak English. I don't know what to do. I don't know why I didn't read the instructions on the page in the real world. I don't know what to do. I don't know why I didn't read the instructions on the page in the real world. I don't know. I don't know. Thank you. I'm glad to have you here. I'm glad to have you here. I've been here for quite some time. So, I have to see you in three hours. I'm sorry. You're going to be a teacher, aren't you? I'm just wondering... You were born in 4th grade, and you're going to be a teacher. What? I'm just wondering what you're going to teach us. Well, when you're going to be a teacher, you're going to teach us some sort of... what do you call it? What do you call it? Something like that. What do you call it? What do you call it? I've been to all over the world. I've been to all over the world. You can say that I've been to all over the world. Let's start with the game. We can do something, we can do something. I know what you're going to do. I'll clean the toilet. Clean the toilet, and you'll have nothing. No, but how do you feel about it? How do you feel about it? Well, I don't think so. 
On the connector there are two signal lines; it's a two-and-a-half millimetre jack, shorted either on the ring or on the tip. Electrically it's the same thing, two and a half millimetres either way. This kind of sensing is very common. There are many methods, but most of them come down to knowing whether the probe is under water. Usually there's a float, or there are simply two electrodes between which the resistance is measured: if they're not in the water, the resistance is high, and when the water reaches them the reading changes sharply. From the resistance you get data about the level, how much water there is. Most of the time the readings go out without a wire. And of course a lot of people use this sort of thing for terrariums, aquariums, that kind of thing.
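The two-electrode method described above can be sketched in a few lines. The resistance thresholds here are arbitrary illustrative assumptions (real values depend on the water, the electrodes and the circuit); the useful part is the hysteresis, so a noisy reading near the threshold doesn't flap between states.

```python
# Minimal sketch of the two-electrode water detector described above:
# resistance between the probes drops sharply when they are submerged.
# The threshold values are arbitrary assumptions for illustration.

DRY_OHMS = 200_000   # above this we call the probes dry
WET_OHMS = 50_000    # below this we call them submerged

def water_state(resistance_ohms, previous="dry"):
    """Classify a resistance reading, with hysteresis so readings
    in the dead band between the thresholds keep the last state."""
    if resistance_ohms < WET_OHMS:
        return "wet"
    if resistance_ohms > DRY_OHMS:
        return "dry"
    return previous  # in the dead band, keep the previous state

print(water_state(10_000))          # wet
print(water_state(500_000))         # dry
print(water_state(100_000, "wet"))  # wet (hysteresis holds the state)
```

In a real deployment the resistance would come from an ADC reading rather than being passed in directly, but the classification logic is the same.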
Mostly, though, you would just dig it in and bury it completely in the ground; sometimes that's worth it and sometimes it isn't. So what's the difference between the boards? One side is Arduino-based. Of course they're not the smallest Arduinos; the chips are similar but a bit different, a bit more efficient. The other side is the Raspberry Pi. The first thing is that on the Raspberry Pi you have Linux. Mind you, the Arduino packages in the distribution repository are old, three years, maybe six. And there used to be two Arduinos, arduino.cc and arduino.org. It started as one project, then it split, and you could get boards from one or the other, because it was one and the same hardware. But they merged back together; I think that happened last year. Yes, last year. There was a lot of money in that dispute. They're cool guys, though. So what's the difference between the Arduino and the Raspberry Pi? Each is good for something different; I prefer the Arduino for some things. The biggest advantage of the Raspberry Pi is that it has an operating system, and the biggest flaw of the Raspberry Pi is that it has an operating system. So the complex things usually run on the Raspberry Pi. And then there are the sensors.
But there are sensors which depend very much on precise timing, and that's the Raspberry Pi's problem, because with an operating system in the way you can't guarantee exact timing from user space. There are special kernel drivers that work directly with the hardware and are able to use direct memory access to control the clock on the pins as needed. A typical case is the one-wire sensors: with 1-Wire I have a single data line, I wake the device up on that line and then start listening to it, and the bit timing matters. So on the Raspberry Pi it has to go through the kernel driver, while the Arduino has no problem with it at all. A very popular sensor of this kind is the DS18B20 temperature sensor. It's a 1-Wire device, and 1-Wire is a standard, standardized by Dallas. Dallas Semiconductor, not Texas Instruments; well, Texas Instruments is also in Dallas, but the standard is Dallas Semiconductor's. And it's really common that the module just streams the data to you together with a checksum, and it's up to me to check whether the checksum matches. I get the checksum with the data, but in the application logic I can't just believe a reading; sometimes it takes maybe ten tries before the checksum comes back clean. For some things the Raspberry Pi is in an interesting position as a gateway, a hub: I just collect the data with Arduinos, ship it to the Raspberry Pi, and from there I can do the machine learning and all the other functions. And of course the Raspberry Pi boots off an SD card, and I'm always a bit surprised when it keeps running for a long time, because of the SD card.
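On Linux the kernel's w1 driver handles the timing-critical bus protocol and exposes each DS18B20 as a sysfs file whose first line ends in `YES` when the CRC matched and whose second line carries the temperature in millidegrees after `t=`. A minimal sketch of the checksum-and-retry reading described above (the sysfs path is the standard one for the w1-gpio/w1-therm drivers):

```python
# Reading a DS18B20 over the Linux kernel's 1-Wire (w1) driver.
# The kernel does the timing-critical bus work and exposes each sensor
# as a file like /sys/bus/w1/devices/28-xxxxxxxxxxxx/w1_slave.
import glob
import time

def parse_w1_slave(text):
    """Return temperature in degrees Celsius, or None if the CRC failed."""
    lines = text.strip().splitlines()
    if len(lines) < 2 or not lines[0].endswith("YES"):
        return None                      # checksum did not match
    _, _, raw = lines[1].partition("t=")
    return int(raw) / 1000.0             # kernel reports millidegrees

def read_ds18b20(path, retries=10):
    """Retry until the CRC matches, as discussed above."""
    for _ in range(retries):
        with open(path) as f:
            temp = parse_w1_slave(f.read())
        if temp is not None:
            return temp
        time.sleep(0.2)
    raise IOError("CRC never matched")

# On a real Raspberry Pi with the w1 drivers loaded:
# devices = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")
# print(read_ds18b20(devices[0]))
```

The retry loop is exactly the "try again until the checksum is clean" behaviour the talk describes; ten attempts is usually far more than needed on a healthy bus.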
And the Raspberry Pi is not designed for rough conditions; it's simply not designed for that. The microcontrollers themselves will most likely survive, but industrial electronics is actually designed for it. Now, I'm not a specialist in electronics; I'm a software person who knows something about electronics. But in industrial gear the components are higher quality, and then the resistor doesn't cost ten hellers, it costs three crowns. For example, there's a company that makes an industrial gateway, which is really a very good computer. The case is milled and the computer sits inside it; there are no moving parts, and all the connectors are covered so that nothing can get in. I don't remember exactly what they say it costs, something like 2,300 dollars. Yes. Not cheap, and that's a big problem. Let me grab it so I can show you some of the other pieces; this one is a smart module.