Oh, thank you. I'll replace the one in there. Should I just start? Okay. Hey, everyone. How's everybody doing today? Thanks for skipping lunch. Oh, or maybe you didn't -- some of you, it looks like, have food. My name's Clint Savage. I'm a senior systems engineer on the continuous infrastructure team at Red Hat. I've been there a little under a year, and I've been working on a tool that I want to talk about today called LinchPin.

So, early on, before I came along, there was this tool called Provisioner 1.0, and it was this powerful thing, right? But it was cumbersome and complex. And I apparently don't have art, which is horrible, because I wanted the art. Let me reload it and see if it shows up. If it doesn't, I'll fix that. That's horrible. Let's try that again. This is awesome -- I've just perfectly timed that. Seriously, come on. Yeah, I just plugged it in, but it's still going super slow. I'm turning off the Wi-Fi and reloading again, see if that's faster. Oh, there we go. Oh, look, art.

So, we had this cloud factory, right? And it was powerful and useful and had a lot of cool stuff, but it was very cumbersome and very complex. To show you the complexity, I want to start with the installation. We went through this the other day, and it was pretty interesting to see how hard it was, so I'm going to scroll down to show you what's involved. Yeah, I have to scroll. There's version seven. I don't know why it has to do all of this work, but it installs a repo for you -- and mind you, try uninstalling this. There's an optional repo, so on and so forth. Oh, let's see here, there's another one. Oh, look, we installed wget. How nice of them. Let's see what else we've got here. Oh, Ansible -- that's nice of them. At least I use Ansible, so that one's okay. libxslt, Python. Oh, and if you're a Beaker user, now you have the beaker-client.
So, if you've never used Beaker before, there you go. And to point out, this was an internal Red Hat script, so it's a little bit crazy. We wanted to clean this up and not have so much code. But we also had a CLI that was... I'll just let it speak for itself. Some of the commands were very useful, and some of them, well -- if you didn't know the keys, you had to look them up. The documentation was okay, but generally we would take these from the docs. This is actually directly from the docs, and if you go to my slides, the link is there, and it'll let you get to it. If not, I'm sorry. The other thing was, it wasn't open source. No thanks.

So, a little story behind this picture. This is my friend, Ricky Ensley. Anybody know who she is? Okay, a few of you in the room. She was in Australia a couple of weeks ago -- I think she got back last week -- and she was in her flat with her host, I believe, and she saw this spider and thought it was huge. She's like, "I'm scared of this spider. It's humongous. It's about this big." The lady she was staying with said, "Oh, no, that was a small one." And I thought, that's pretty scary. I don't want anything to do with it if that's the small one, and I'm guessing they get bigger. So I thought it was pretty interesting. Point being: if it's not open source, we really don't want it. We wanted to fix that problem as well.

So we came up with this concept of what we wanted, and some ideas. First: simpler. Simpler is better, right? Cleaner installation, simple topologies -- we'll talk about topologies in a bit. Obviously, it's open source. It's also Vagrant-like, with simple provision and teardown. Or, if you want, happy little clouds. If you don't get the Bob Ross reference, I'm sorry. And one of the other things is that it's easily extensible.
So if you want to take it and do something with it yourself, you can. It also generates Ansible inventories today, and down the road we're going to be able to generate other types of inventories for other tooling as well. And a lot more.

So, enter LinchPin. It's Deadpool-approved. I was looking around for pictures of linchpins, and I found one that was pretty nice, but then someone told me to use a grenade, and I saw the Deadpool grenades and thought, oh, it has a linchpin in it too. And it has his little catchphrase on there. That was pretty cute.

It's completely written in Ansible. That's number one on the list: we're already using Ansible, so why not take advantage of it? It gives us about 90% of what we need. We're doing a lot of cloud provisioning, and right out of the box Ansible supports OpenStack, AWS, GCE, libvirt. We have a Beaker provisioner that we wrote, and some others that we'll talk about as we go along.

It's also asynchronous, which is key for us, because we want it to be fast and able to spin up these clouds quickly. Say you have ten machines you want to spin up: spin them all up at the same time, and we'll gather that data for you and pull it back down. Ansible also has good documentation, so we're building on something we can actually read.

So let's talk about LinchPin a little before we get into the details. LinchPin takes what's called a topology file and passes it to the linchpin command, which is the only part of it that's not Ansible -- really, it's Python; it's the command-line interface that we wrote. And from that, you can provision any cloud you like: AWS, OpenStack, GCE, Beaker, Duffy, pick something.
And if it's not there, we can write an Ansible module for it as well -- they're really easy to write, and we're hoping people will do that. From that, you get the nodes and instances you spun up, on whatever cloud you want. And you can do multiple at once: if you wanted AWS and OpenStack and, say, GCE, all spun up at the same time, you can do that with a single topology, which is part of our goal.

One thing to think about: what if you have quotas on your cloud? Or a limited budget, and when you hit it you want to switch to another cloud, but you want the same types of nodes up and running? You'll be able to do this. We also targeted this as a continuous-infrastructure or continuous-integration component, so you can spin it up, do the thing you need to do, and tear it back down really quickly.

From those clouds, LinchPin gets the data back and generates an output file, which lets us track all of your information -- all the details you'd otherwise have to gather by hand. Then from that, along with what's called an inventory layout, we generate an Ansible inventory output. And then you can do stuff with it that's really cool. Oh, look, maximum effort, right? More Deadpool references.

So taking this inventory -- and I just created an OpenStack one -- and a set of playbooks, say I want to build an OpenShift cluster: I can run Ansible with the OpenShift playbooks and this inventory against those nodes. This is stuff I'm going to show later, but if you had these nodes up and running, you already have an inventory, and you just bring your playbooks. That's all you need; everything else comes with LinchPin.
And what you get is your OpenShift cluster, and literally I can do this in about 10 to 15 minutes tops -- sometimes less, depending on the infrastructure I'm on. In the CentOS CI environment, they use a tool called Duffy, which provisions bare-metal machines at the moment, and we can spin up an OpenShift cluster there in about six minutes, which is actually pretty fast. That includes configuring everything and pulling down the repos you need, and it's all in the playbooks up there; everything else is done by LinchPin.

So I talked earlier about installation. By the way, if you have any questions, feel free to ask. I know you're all starving, so you want this to go as short as possible, right? Okay. Installation is simple: right now I just have a simple virtualenv that I activate, and then I do a pip install. Eventually there will be an RPM available. It's already public on PyPI, so if you type `pip install linchpin`, it should actually work for you right now, if you want. Check it out.

Let's talk about those topology files from the diagram. This is probably the most interesting part, actually. What you see here is a very simple YAML document. I can give it any name I want, I specify any resource group name I want, and I can have multiple of these if I want to -- I'll show you some examples in a bit. From that I can say: hey, I want to provision a Duffy system, of type duffy, and this is the name the nodes will get -- when you spin them up, they'll be called duffy-node-0, duffy-node-1, and so on, depending on what infrastructure you're on. This one is RHEL 7, architecture x86_64, and I want three of them, so I'm going to bring up three nodes with this example.
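As a concrete illustration, a Duffy topology of the shape just described could look something like the sketch below. The exact key names are paraphrased from memory of the early LinchPin schema, so treat them as illustrative rather than authoritative:

```yaml
---
topology_name: "duffy_3node"          # any name you like
resource_groups:
  - resource_group_name: "duffy"      # any resource group name you like
    res_group_type: "duffy"
    res_defs:
      - res_name: "duffy-node"        # nodes come up as duffy-node-0, -1, -2
        res_type: "duffy"
        version: "7"                  # RHEL 7
        arch: "x86_64"
        count: 3                      # bring up three nodes
```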
An inventory layout is a little bit longer, but the concept is -- let's use OpenShift as the example here. How many of you have used OpenShift before? Okay, and anybody used openshift-ansible before? That's what this is basically emulating: it takes all the variables that openshift-ansible needs to run the OpenShift installer.

So what I've done, from the top down -- which apparently I haven't scrolled down to -- is: I have some generic variables, which I'll talk about, and then I have one, two, three hosts, matching the three hosts I pointed out earlier. If you provision more hosts than the layout needs, it'll work just fine. If you provision fewer, it will tell you it can't generate an inventory, but it will finish the process -- it'll still spin up your nodes and just say, sorry, I can't generate an inventory, so you'll need to do that by hand. If you have at least the minimum number, you should be fine.

Then we have our master, which is in these three groups; our plain old node, which is in this group, and there's one of those -- if I wanted two nodes, I'd just list that here. And because I was building OpenShift from scratch, I wanted to build my own repos, so I built a separate repo host; these two entries are almost identical, it just adds this repo host. Then down here, for the particular groups -- if you've ever seen an Ansible inventory, it might make sense how these lay out -- this would be the OSEv3 group, and these are the variables that go inside it. These are the children, and these are the groups that might be in that child. Does that make sense to everybody? Any questions? Yeah? Why the pipe character? I'm sorry? Oh, so his question was, why the pipe character here? That's actually a YAML thing, allowing me to print multiple lines.
And because it didn't quite fit on this slide, I put it here, but I actually could have multiple registries as well, so I could have listed it three lines long if I wanted to. Your question is that this doesn't have to be a string? Oh, so his question is, will it parse correctly if I don't use the string line continuation? It'll parse just fine. It's just that we wanted three different registries at the time, and I took all of that out and left one, so my example might be a little extraneous for this purpose, but it does work.

The next question is about multiple lines here, and it really depends on what application you're talking about -- if you have multiple lines here, will it be a list, or something like that, right? Yeah, so it's application-specific, and that's important: this layout was for OpenShift. I'm not doing one for, say, some web thing I wanted to set up. This is just specific to my registry, and whatever you want to apply it to -- that's the flexibility of the layout itself, which helps prove my point: you can lay this out any way you want, and it'll just turn into an inventory. I'll show you that inventory in a bit, and I'll do a demonstration in a minute. Any other questions?

So the question is, can I run it against just public clouds right now? And the answer is, there are many clouds you can run against. I have one currently in development for libvirt; we have one for Beaker, and one for an internal service called Duffy, which is just at CentOS itself. If you have something that's Ansible-ready -- a provisioner, in general -- LinchPin can run it. It's a matter of maybe a few days of development to get it in there, and it's mostly making sure you write tests and documentation. It's actually pretty simple.
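To make the layout discussion concrete, here's a hedged sketch of an inventory layout along the lines of the one on the slide. The host names, group names, and the multiline registry value are illustrative stand-ins, not the real file:

```yaml
---
inventory_layout:
  hosts:
    openshift-master:
      count: 1
      host_groups: [masters, nodes, OSEv3]
    openshift-node:
      count: 1
      host_groups: [nodes, OSEv3]
    repo-host:                        # extra host for building repos from scratch
      count: 1
      host_groups: [repo, OSEv3]
  vars:
    OSEv3:
      # the YAML pipe allows a multi-line value; several registries
      # could be listed here, one per line
      openshift_docker_additional_registries: |
        registry.example.com:5000
```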
We have a team that's already written a few for us, and we'll talk about that more near the end. Any other questions about the layout? Because this is pretty critical -- it's one of the more important parts.

So, to bring those two pieces together in LinchPin, we said: okay, let's take the topology and the layout and put them in a single file. Have any of you used Vagrant before? This is going to feel very Vagrant-like. They have a thing called a Vagrantfile; we created a PinFile, because we like to copy things we think are good. In it, we say: this is the name of the target, this is the topology, and this is the layout. And when you actually run it, which I'll show you in just a moment, it will spin that up and apply that layout.

Here's roughly what it looks like when you build this. You can do it in any directory; it just drops the PinFile in there, and then creates a topologies directory and a layouts directory for you. Sure -- so the question was, will it ignore the things that are not in the PinFile? And the answer is yes. In fact, we're working on a feature -- not working right now, but it should have been -- where, when you launch it, you can specify a target by name, which is why we have that there. That's actually pretty useful as well. So you can do all of this with `linchpin init`, which generates your PinFile -- kind of answering your question and adding to it.

Then we turn things on and off with rise and drop. You can do `linchpin rise`, and it turns on all of your clouds, or you can say `linchpin rise` with a target name, like the ec2 test target I wrote there, and it will turn on just that one. Currently, that's broken. It's still in development, and I know there are things we need to improve, but that's why I'm here.
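A hedged sketch of a PinFile of the shape described here -- the target names and file names are made up for illustration; early LinchPin PinFiles mapped a target name to a topology and a layout roughly like this:

```yaml
---
ec2-test:                            # a target you could rise/drop by name
  topology: ec2-simple.yml
  layout: ec2-layout.yml

openshift-3node:
  topology: openstack-3node.yml
  layout: openshift-3node-layout.yml
```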
The point is to introduce it to you and hopefully get people to help. Other things it provides: configuration and a validator. If you want to update or regenerate the configuration, it will generate what's called a linchpin config, which is generally only used by developers. You can also validate your PinFile, which gives you the ability to validate the YAML in it and make sure it would actually work.

From here, we also have the ability to get and list our topologies and layouts. So if you had, say, one or more git repositories with your topologies and your layouts, and you want to share them, you can use a repository as an upstream, essentially. Then you say `linchpin get`, and it pulls down that topology structure by name. It's one of the things we're still working on, but it currently works with a git repository, which is pretty cool. So store them there, and you can do that.

So let me give you a quick demonstration. Well, Shaggy's going to do it, because he's cool. I don't know if any of you have ever heard of playitagainsam, but it's really cool, because it lets me just mash on the keyboard and it replays a recorded session for you, as if I were installing things and doing magic stuff. So here's the installation I would normally do right now -- if you're lazy. I did these for real, but I don't want to do them live; the internet here is not great. I really did this, and then I recorded it. So now it's installed, and what I can do next is run `linchpin init`. I'm going to create a directory called lpdemo, go into that directory, and here's the help, just to give you a clue of what it looks like.
So I've got config, drop, init, inventory generation -- which we're working on; it isn't a feature that works yet, but we'll talk about that later -- layout, topology, of course rise, and of course validate. So, most of what I mentioned. One of the things we're working on is a work directory -- or, what's the thing Jenkins calls it? Workspace. Workspace. That will probably be a variable you can provide later on, and it will drop things in that directory instead, or look there, depending on what you're trying to do. A couple of notes: you'll see there's a lot of language that needs to improve here, and we have typos and other things we're working on.

But I can run `linchpin init`, and it generates a file called PinFile, and it also generates these directories. Okay? We've got our PinFile; we've got our layouts; we've got our inventories, which is where the generated inventories get dumped; and then we've got our topologies. So let's cat that PinFile. Here's a simple PinFile -- this is what gets generated for you. You specify the topology name, and the inventory layout looks just like this. It's got a couple of examples in there, but I'm not going to use those for my demo, so let's see.

Okay, now let's move to the next one. Questions so far? Making sense? Good. So let's go through and create my own now -- like I said, I'm just mashing on keys. Here's what my PinFile looks like. Pretty straightforward: it has the simple ec2 cluster and the OpenShift 3-node target. You saw the OpenShift 3-node one up on the screen; it's a little different here, I modified a couple of things. In topologies, I have these three files, and as mentioned earlier, the ones not referenced in the PinFile -- like the Duffy one -- won't even be used. And here's my layouts.
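The directory structure that `linchpin init` creates, as described above, would look roughly like this (names as shown in the demo; comments are my annotations):

```
lpdemo/
├── PinFile
├── topologies/    # topology YAML files
├── layouts/       # inventory layout YAML files
└── inventories/   # generated Ansible inventories get dumped here
```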
Let's look at that one -- the actual files themselves. So here's the topology, excuse me, that I'm going to use. I've got an OpenStack resource of type os_server. I'm going to spin up the m1.small flavor, and this is the image I'm going to use. I've got three of those, and I'm going to associate a keypair. So, for instance, because OpenStack supports cloud-init, I can inject things: if I know where my CI factory keys are, I can provide those as well. It also has the ability to associate credentials -- we have an okay way of doing that right now, but we're working on an authentication driver.

Yes -- so the question was, are the resource definitions dependent on the resource type? Yes. We have a schema that we define, and as a developer, if you were to add another cloud, you would define your schema, and then it would show up here. We have examples for -- I won't say 20, probably closer to 10 -- different clouds. When I'm done, I'll show that so you can see what they look like. No problem.

So that's our basic topology. We're going to spin three of them up, and then I'm going to go ahead and apply -- any more questions? Okay -- apply a layout similar to what we saw, right there. One thing I didn't mention earlier: you see those IPs up there, with the dunders -- the `__IP__` placeholders at the top? Those actually get translated to the actual IPs. We have a translation process inside the OpenStack module that we wrote, and we do it in most every one of the modules, so it's a generic component. That way, when you generate the inventory, it fills in the IP: `openshift_hostname` equals the IP on one host line, the same on the next, and it's the same IP everywhere that host appears. It's pretty slick how it works, and that's the end of that one.
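A hedged sketch of that OpenStack topology: the flavor and count match the talk, but the image name, keypair name, and exact schema keys are illustrative guesses rather than the real file:

```yaml
---
topology_name: "openstack_3node"
resource_groups:
  - resource_group_name: "openstack"
    res_group_type: "openstack"
    res_defs:
      - res_name: "openshift-node"
        res_type: "os_server"
        flavor: "m1.small"
        image: "rhel-7-server"       # illustrative image name
        count: 3
        keypair: "ci-factory"        # injected via cloud-init
```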
So let's show how that works in real life now. Top-down -- so the question was, how does it know which IP maps to which host? Just like Ansible, it goes top-down. We assume the first machine provisioned, whatever lands first in the outputs, is used as our master in this example. Does that help? Okay. I figured, why reinvent the wheel rather than use what Ansible has already given us?

So I'm going to run `linchpin rise`. This one goes really fast, and it outputs a bunch of Ansible, but you don't care about most of it -- we're going to hide most of it in a log eventually, so what you see right now will probably be gone, replaced with more of a "we're doing this step, now this step" style. You can see a bunch of output, but right here it's actually provisioning my machines. Since I did this inside Red Hat, they say redhat.com on them, but I've provisioned hundreds outside as well, specifically on Duffy and a few on AWS. And there, it's done.

So let's look at what it actually did. It created an inventory for me -- let's show what that looks like. Scrolling up a little: right here, I've got my OSEv3 children with masters and nodes, all my vars are right here, and here are my groups -- repo host, here's its IP. Earlier I mentioned those variables that get assigned IPs, and there they are, all mapped for you. Pretty slick, huh? Everybody liking this? You skipped lunch for this. It's awesome! Get excited!

I want to prove that it actually went up, though, so I ran `nova list`, because I did this on OpenStack, right? There are my nodes, actually up and running and happy. To make it a bit more fun, I pinged them -- well, one of them anyway -- and I can also SSH in. Remember that key I mentioned? There's my CI factory key where I put it, and it lets me SSH into the box.
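The top-down mapping just described can be sketched in a few lines of Python. This is not LinchPin's actual code -- just an illustration, with made-up IPs and group names, of how provisioned hosts could be assigned to layout slots in order and how the `__IP__` dunders get expanded into an inventory:

```python
# Illustrative only: assign provisioned hosts to layout slots top-down,
# then expand __IP__ placeholders, the way the talk describes.
provisioned_ips = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # made-up outputs

layout_hosts = [  # order matters: the first slot gets the first IP
    {"groups": ["masters", "nodes", "OSEv3"],
     "line": "__IP__ openshift_hostname=__IP__"},
    {"groups": ["nodes", "OSEv3"],
     "line": "__IP__ openshift_hostname=__IP__"},
    {"groups": ["repo", "OSEv3"],
     "line": "__IP__"},
]

def render_inventory(hosts, ips):
    """Build an INI-style Ansible inventory from layout slots and IPs."""
    groups = {}
    for host, ip in zip(hosts, ips):
        line = host["line"].replace("__IP__", ip)  # same IP everywhere on the line
        for group in host["groups"]:
            groups.setdefault(group, []).append(line)
    sections = []
    for group, lines in groups.items():
        sections.append("[%s]\n%s" % (group, "\n".join(lines)))
    return "\n\n".join(sections)

print(render_inventory(layout_hosts, provisioned_ips))
```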
If I wanted to do more, I could run Ansible playbooks against this in any way, shape, or form I want: I have the Ansible inventory, I run ansible-playbook against it with my playbooks, and I'm done -- now you can provision this to do whatever you want. So I can see the uptime, and -- oh, I lied about this, don't look at that. It was a Red Hat box and I said it was CentOS; I was doing the demo and forgot to switch that one thing. I'll fix it later. So that's the end of that one. Any questions about rise? Okay.

The last demo is drop, which, as you might imagine, does the opposite of rise: it drops everything. It turns off all of our systems and deletes our output file. It does keep the inventory, though, so if you want it later for reference you can have it, but it will be overwritten the next time you do a rise. As for outputs, we're actually planning on tracking them: when you spin something up, it will have a unique ID and be stored somewhere, so we can recall it later if we need to regenerate an inventory from it or something like that.

So that turned everything off -- you saw it spin down -- and I think if I run a `nova list`, they're not there. Any other questions? Yeah. What do you mean, why wasn't it -- that's all it was. What it's doing is going through a bunch of playbooks right now to see which type of inventory you have in the outputs. It's looking at the outputs and asking, which ones do I need to turn off? And it processes that with Ansible. If we scroll up, you can actually see that -- like right here, it's going through common, looking for a topology file. And it does look for other resource groups too, and skips them if they're not there. And then once it finds it -- whoops, that's page up.
And this is why you don't really need to see all of this -- it's not really necessary. For the most part, this won't show up in the near future; in fact, I expect a logging function before we hit 1.0. We're at about 0.8.6 right now, targeting 0.9 by the end of February, and somewhere around the end of March we're targeting 1.0. Maybe April. I'm not going to promise when, because I don't know.

So that's the end of the demonstration, but one thing I want to talk about before I finish up is contributors, and that's really important to us. We want to get this out, and it's open source, right? It's on GitHub right now. In fact, if you're interested in my presentation, it's at the CentOS PaaS SIG, which I'll have at the end also -- this presentation is currently in a PR. If you're interested in contributing, we are really looking forward to contributors. We really want help with it, and we're actively asking for contributors.

One of the things we did was bring a contributor with us, David, and he's going to talk about a tool he wrote that builds on LinchPin.

I think it's on. I'm not really supposed to be here right now, but I'm here, and I'm going to talk for a couple of minutes about how we're collaborating with the folks developing LinchPin. I work in QE, and I do a lot of work with Jenkins. Cinch is a tool that allows you to create and configure Jenkins masters and slaves. In QE, we do a lot of provisioning; our workflow is basically to create an instance, run some tests, tear it down, and release those resources back to the pool so other teams can use them.
So with LinchPin, we're able to add clouds or other services that we care about that maybe other teams don't use -- we contributed the Beaker module to provision in Beaker using LinchPin, which is a big part of the work we do in QE, allowing teams to do hardware-enablement testing. And Cinch is basically a tool that wraps around LinchPin: we ask LinchPin to provision things at the right moment, we insert our own Ansible playbooks to configure a Jenkins slave at the right moment, and if you want to deprovision, we run linchpin drop after we do some work. So we're basically inserting our own playbooks in and around the LinchPin provisioning at the right moments.

One really good idea that Clint had earlier this week was to have hooks in LinchPin where -- we'll talk about those more in a minute -- users can insert their own playbooks at certain stages. One of my favorite features, and one we've really utilized a lot in LinchPin, is the layouts. When we provision these Jenkins slaves, we have a lot of variables that we provide as defaults for our users, but they might want to override those and provide their own configuration data to control how the playbooks operate. With the LinchPin layout files, we can just put some vars in, and they override the defaults, so users have an easy way to configure the behavior of the playbooks we're providing.

Thank you, David. He's going to stay up here because he's been helping me with some of this -- he also helped me present at QE Camp. We weren't sure if he was going to be up here or not, because we weren't sure what our timeline was.
So, some of the things we've got coming up -- and this is where we're interested in your input, and in any help you want to provide as a contributor in any way, shape, or form. Pip packaging is done, and RPMs should be there in a couple of weeks, tops. There's asynchronous provisioning: what you saw me do right here was not asynchronous, but if you want it, you simply put `async: true` in your topology and it turns on, and Ansible takes care of the rest. There's the simple CLI you saw -- it'll get better over time. And we have an API that we wrote so we can code against it.

One of our near-term goals is some sort of dashboard to show you what's up and down. Additionally, we plan to be able to gather information from the clouds you have, based on your outputs, and generate topologies for you -- another goal we'll be working on in the near future. Longer term, there's Satellite, which David and his team are planning to help us write, but it'll also be out in the open. And we're considering OpenShift as a provisioning target: we don't currently provision to an OpenShift endpoint, but we could, and we're not sure how we might want to do that yet, so we're looking into it.

Yes -- so the question was, do you mean deploying an OpenShift cluster, or applications and services on top? And the answer to that question is yes, both. We can already provision an OpenShift cluster without doing anything extra; in fact, I do it every single day in my CI, and I actually build it from scratch, using LinchPin to launch the systems. Another thing we're thinking about doing -- oh, sure, follow up.
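As described, asynchronous provisioning is meant to be a one-line switch in the topology. A hedged sketch -- the exact placement of the key in the schema is my guess:

```yaml
resource_groups:
  - resource_group_name: "openstack"
    res_group_type: "openstack"
    async: true        # provision every instance in this group in parallel
    res_defs:
      - res_name: "openshift-node"
        res_type: "os_server"
        flavor: "m1.small"
        count: 10      # all ten come up at once instead of one at a time
```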
The question was, do you do any testing of upgrade scenarios on OpenShift specifically? Not yet, but that's something our team is working on, so it's definitely in the plan.

Other things we're working on: a Vagrant plug-in. For those of you who are Vagrant users, you saw that `linchpin rise` and `linchpin drop` are similar to `vagrant up` and `vagrant destroy` -- destroy is the better comparison right now. We'll probably have a halt at some point to turn machines off, but we don't yet have the ability to re-provision them or turn them back on again; that's in the roadmap as well.

One thing we were talking about, because of the way Cinch leverages playbooks, is implementing what are called hooks. Think of them in terms of git hooks -- pre-receive and post-receive, pre-commit and post-commit, depending on what you're doing. We might want a hook in place to do something before or after an action. For instance, with Jenkins, you might want to disconnect your slaves from the master before you shut them down. So we might have a pre-drop hook: we want to tear that machine down, but do it properly, so the master actually knows the machine is gone before you turn it off. Otherwise, it just keeps trying to connect to it and failing, and that's not a good thing. We want to be able to put anything there. Any thoughts on that? Anything else you want to add?

For the Cinch project, I feel like that feature would be a big deal, because sometimes we need to do things before we tear down these instances, to do things in the right order and properly. So I think it's a great idea, and I think we'll definitely be utilizing it for our CI work.

Cool. Another thing -- actually, I don't know why the slide says no upscaling. We do upscaling right now.
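Hooks were only an idea at this point, so any syntax is purely hypothetical. If they followed the git-hooks analogy described above, a pre-drop hook attached to a PinFile target might look something like this sketch (every name here is invented for illustration):

```yaml
openshift-3node:
  topology: openstack-3node.yml
  layout: openshift-3node-layout.yml
  hooks:
    predrop:                          # runs before teardown
      - name: disconnect-jenkins-slaves
        type: ansible
        actions:
          - disconnect-slaves.yml     # tell the master these nodes are going away
```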
What I meant to write there was no downscaling: we don't scale down yet. Scaling up, though, is not hard at all. If you want to go from three to six nodes, you just change the count and run the command again: run linchpin rise, and it will scale up to six. Then there's cloud bursting, the concept where you have quotas or limits on one cloud versus another, and you want to be able to launch into a new cloud, or turn off all the nodes over here and turn them on over there, as the case may be. Those are some of the things we're thinking about. If you have other thoughts or interesting things you want to share, I'm very interested in hearing your questions as well. With that, I have five minutes left before Q&A. Okay, well, we're basically at Q&A now. Let me grab a drink really quick. So the question was that there are a lot of challenges in getting the right access to the right content; is there anything LinchPin can do to help with that? One of the things we want is the ability to track multiple different clouds at the same time, and that might be something that's addressable there; I just don't know exactly what you're after. Is that close? We can talk after, too, and you can give me a better scenario of what you're thinking about. Any other questions? Absolutely, we want to deprecate the old provisioner. You saw the install.sh script; I never want to look at that again. So we definitely want to get folks using this, testing it out, and collaborating to improve it. Any other questions? Okay... there's one more. It's targeting anything and everything you can provision, anywhere. The question was, is this...
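Sketching the scale-up described above, with the same illustrative topology fields as before (again, not verbatim LinchPin syntax):

```yaml
# Scaling up is just editing the count in the topology...
res_defs:
  - res_name: ci_node
    res_type: os_server
    count: 6    # was 3; bump the number and run `linchpin rise`
                # again -- the existing nodes stay up and the
                # provisioner brings up the three new ones
```

No separate scale command is needed; re-running rise against the edited topology does the work.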
something that's targeted only to certain infrastructures, since it came to us through the CentOS PaaS SIG? And the answer is: really, we're trying to provision anything you can think of. That's actually why I'm targeting libvirt, as an example. Maybe down the road we'll target more local-type clouds; that's still what I'm currently targeting, but there are others. The idea is to get this as close to the developer as possible. So if you're running tests every single minute and you can't do it on your local machine with containers, you'll be able to spin up a simple set of libvirt nodes, run your tests, and tear them back down in short order. Yeah? You mean leveraging topologies... sorry, layouts? This thing here? Oh, the example I gave with hooks, okay. So the question was: with hooks specifically, wouldn't it be easier just to put a set of playbooks in order to do that? You could totally do it that way, and that's fine. The idea for us is that you have a bunch of playbooks where you want to make sure something happens every time, and you have people who just want to use your code the way they want, without worrying about that. So we'd write a simple hook that would basically... and this is something we're still thinking about, so it's not something we're guaranteeing we'll do; if you have a better idea, I'd love to hear it. Essentially: you spin up a node, do the thing you're doing, and when you want to tear it down, before the teardown, you disconnect it in some way with a simple Ansible playbook, and then it runs whatever job does the drop. It could be in the same playbook or an external one; it really wouldn't matter to me. I'm thinking of it as the generic case in this scenario. So... yeah, so...
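For that developer-local loop, the idea is that the same topology format could simply point at libvirt instead of a remote cloud. A hypothetical resource group might look like this (the res_group_type, res_type, and field names are assumptions, not confirmed LinchPin schema):

```yaml
# Hypothetical libvirt resource group for a fast local test loop:
# linchpin rise -> run your tests -> linchpin drop, all on one box.
- resource_group_name: local_test
  res_group_type: libvirt
  res_defs:
    - res_name: test_node
      res_type: libvirt_node
      memory: 2048      # MB per VM; illustrative values
      count: 2
```

The workflow stays identical to the cloud case; only the resource group changes, which is the "easily extensible" point from earlier in the talk.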
Okay, so the comment was essentially that maybe we shouldn't use hooks and should instead do it in an ordered-playbook style; that's Langdon's idea. I'm not sure yet, so I appreciate the contribution, because we're really still trying to figure out what we want to do there. It's probably going to be a proof of concept that I'll have to do first, and we'll see where we are. Sure. So Adam's question was: is there a target for production use, or something to that effect, down the road? And yes is the answer. We were actually in a meeting recently with someone who wants to take his OpenStack cluster, pull all the data from it using a topology generator, and then tear it down and rebuild it. That's a good example of something you might do in an environment where you're also doing scale-up, or other things you'd do with a production environment. I don't know how far we'll extend that. I want to keep it simple; simplicity is really the main goal. I want it to be as simple as possible while still giving you the functionality you need, which is part of why the CLI is as simple as it is right now. Anything else? Yeah. Right, so the question is: are you thinking about using an inventory system, something like Foreman? That's actually something we've been thinking about, and we didn't put it on the slide because at the last presentation no one raised that specific point, so we weren't sure whether to approach it. It's definitely on the table; I'm just not sure exactly how. If you're interested in contributing to that, we'd definitely be interested in hearing more about how you'd want it to happen. Oh look, more questions over here. I'm not surprised. I love teasing you, Adam; don't take it personally. Yes.
Excuse me, the question was: if you do add inventory management like that, can it be optional? It wouldn't be anything other than optional. In fact, the inventories we output today are optional; you don't have to use them. If you just want to spin something up and SSH to it, you can do that. That's the idea: we want it to be flexible too, and that's what makes it simple to me. Simple and easy are not the same thing, right? Simple means you can use it and it's simple to get working, but that doesn't mean it's the only thing there; it just means it's easy to use from the ground up. I was trying not to use the word easy, but I did anyway. Any other questions? So please check out our website if you're interested. The slides are, again, on GitHub: if you go to github.com/herlo, that's me, and the link is in there too. That's where my presentation is, that's where all this stuff is, and you can get to the upstream from there. CentOS PaaS SIG is a lot harder to type than herlo, same with RedHatQE, I think. Thank you.