So, we'll talk about OpenStack. Hey, folks. Thanks for coming to the talk. That's partially correct: this is a little bit about OpenStack, but it's also about a tool that we've started to develop, along with things like Foreman and Ansible, to massively provision and automate infrastructure. So before I get into how we do all of this and the methodologies that work for us, it'd be helpful to explain what I do so you can understand why we've gone the direction we've gone with our tooling. There are not enough car analogies in open source. There's never been a bad car analogy to explain Linux or open source; it's never happened. So I'm going to give you a pretty terrible race car analogy rather than bore you with some multi-paragraph job description. I work on the performance and scale team at Red Hat. I'm part of a two-man DevOps team that supports a large amount of internal high-performance infrastructure gear: servers, switches, storage. And our job is basically to... I lied, I said I wasn't going to explain it that way. Let's go to the race car analogy; I think that works better. Think of high-performance servers as race cars. They come in different shapes and sizes, but they're always really quick and really fast. Think of high-performance networks as the race tracks on which these cars drive around in endless loops, trying not to crash and burn. And think of the actual races, the cars running on the tracks, as performance and scale testing for various pieces of software. Using this very terrible analogy that I just made up, the race car drivers are basically the performance engineers who test and vet things like OpenStack, OpenShift, RHEL, any product that Red Hat or partners might use, or upstream products. So who would not like to be a race car driver?
That's a nice profession to glorify yourself and daydream about. And we are simply the pit crew: the engineers who make all the race tracks work and all the cars on the race tracks operate. One of our goals is to schedule as many races as we can with all of the resources we have and be as efficient as we can with them. So we developed a tool called Quads, which stands for "quick and dirty scheduler." It's not an installer. It doesn't do any provisioning. It's simply a shim or a wedge that works with other ancillary tools to fill some gaps that we've run into on the scale and performance team. It's important to talk about what it isn't before we explain what it is and how pieces like OpenStack and Foreman fit in. So Quads is not an installer, and it's not a provisioning system. It bridges the gaps in several technology areas where, for our specific use, we find the need for some automation and tooling. It can use Foreman as the provisioning system, but it doesn't have to; you could back-end it with Ironic, or with any provisioning system out there. It basically helps us automate all the boring things that we don't want to do, and it also auto-generates documentation in places where we wouldn't want to write it, or where we would screw it up as humans. So again, this is just going over what it does, and the core piece of it is a YAML-driven scheduling mechanism. All of the machines and switches that we manage are part of a YAML structure, and there's metadata, key-value pair information, about each one of those nodes: who the current owner of the machine is, or what the machine is scheduled to do between this timeframe and that timeframe. So where is it used? Where do we use this Quads system that we've put together? We have an internal environment in Red Hat called the Scale Lab. It's about 176 nodes.
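To make that YAML-driven scheduling concrete, the per-host metadata might look something like the following. This is an illustrative sketch only; the field names, hostnames, and layout are my assumptions, not the actual schema from the Quads repository.

```yaml
# Hypothetical sketch of the cloud and host metadata Quads tracks.
clouds:
  cloud02:
    description: "OpenStack scale testing"
    owner: rbryant
    ticket: "RT-1234"        # assumed ticket-reference format
    vlan: 1102
hosts:
  c08-h21.example.com:       # example hostname
    default_cloud: cloud01   # where the host parks when idle
    schedules:
      - start: "2017-02-06 03:00"
        end:   "2017-02-27 03:00"
        cloud: cloud02
```

The key idea is that the schedule list per host is open-ended, so future allocations can be entered long before they happen and acted on automatically.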
We're looking to scale up to 500 or 1,000 in the next year or two, and it's really meant for high-performance R&D testing of different products. This wouldn't be where you go to get hardware as an engineer; this is a place for when you already have a use case vetted out, you already have a set of tests at a smaller scale, and you want to see how they perform at a very large scale. It's an environment that requires a lot of spinning hardware up and down, and it accommodates a lot of short-term use cases. So it's not a place where you rent a server for a couple of months, get to know it, pet it, get friendly with it. No, this is basically an Airbnb sort of model: you only have a subset of servers for about four weeks, and then they're automatically spun down and handed off to somebody else. Again, this is the kind of stuff it does and how it's designed, but everything is based around a YAML schedule: when you run Quads, it reads the schedule, and you operate out of that. And we call out to other tools. We might call tools that go out to the Juniper switches and set a certain VLAN the right way, so you'd have a hands-off way to spin up a bunch of machines and pass them off to the user without any human intervention whatsoever. So who uses the Quads framework, and who takes advantage of this? Right now, just myself and my colleague, who unfortunately couldn't make it today; we're the only ones who operate it, and for the tenants within the Scale Lab environment, the back-end usage is completely transparent. The only thing facing them is the auto-generated documentation, which we'll get into. There's another initiative called the Massachusetts Open Cloud, with some schools: Harvard, Boston University. They have a similar endeavor called HIL.
So we've been in talks with them; there's some overlap between the projects, but there's one thing our system does that might be useful to other people doing provisioning, and that's the scheduling aspect and the auto-generation of documentation. If it's not documented, it's not done, and that's unfortunately one of the things that falls through the cracks; it's also a place where there are a lot of errors, because we're only human. Where Foreman comes in: Foreman is just the provisioning arm. We don't actually do any provisioning; we pass all of that off to Foreman. And in a future feature request, we'll actually have Foreman views presented to the user. So if you request a set of 50 machines for three weeks, you'll get IRC notifications, a couple of email notifications, you'll receive your machines, and then you'll receive a Foreman login that's specific to you. If you want to, you can log into Foreman and reprovision your machines at will, but otherwise you don't worry about it. So what problems are we trying to solve here? There are about four or five main ones. The main one is server hugging. Who's familiar with server hugging? That's right: we want no more server hugging. There's always a greater desire for hardware and resources than there is actual hardware to give to people, and I think that's a perpetual problem that will never go away. So the best thing we can do is be as efficient as possible at how we divvy up those resources so everyone can take a turn. That's really the crux of our goal here. This is different from a model where, say, you get allocated budget to buy a handful of servers for R&D, 10 or 20, and there's a wiki page somewhere that someone has to edit.
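An aside on the "reprovision at will" idea above: Foreman's REST API lets you flag a host for rebuild by updating it with `build: true`, after which its next PXE boot reinstalls the OS. Below is a minimal sketch of constructing such a request; the URL and host ID are made up, and this is my illustration, not Quads code.

```python
import json

FOREMAN_URL = "https://foreman.example.com"  # hypothetical Foreman instance

def reprovision_request(host_id):
    """Build the (method, url, body) for marking a Foreman host for rebuild.
    Sending this PUT with valid credentials flags the host so that its
    next PXE boot reinstalls the operating system."""
    url = "%s/api/hosts/%s" % (FOREMAN_URL, host_id)
    body = json.dumps({"host": {"build": True}})
    return "PUT", url, body
```

A tenant (or a tool) would send this with an authenticated HTTP client and then power-cycle the host over IPMI to kick off the rebuild.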
Someone gets loaner hardware for a couple of months; two months turns into six months, six months turns into three years, and that's server hugging. It's your friend, it's your pet, but we want to get away from that model by being programmatic about it and being deadly efficient. This is your allocated timeframe; if you want to extend it, let us know, but when it's done, your machines are going to automatically spin down and you're going to get out. So we have clearly defined scheduling for servers, and the main benefit, too, is that with the YAML structure, if we know what development is going to be doing six months or a year down the road, we can pre-schedule every one of these server and network allocations ahead of time. So at 3 a.m., when people are hopefully sleeping, these machines are going to spin down and then spin back up for another purpose. What are some other things we're trying to tackle? Less human error comes with automation. We want to give control over to the machines. Let's let the machines control everything, because what's the worst thing that can happen, right? What could possibly go wrong? "Siri, run quads deprovision." "Deprovision. Deprovision. Exit 1. Exit 1. kill -9." There are still some bugs in the code. So we automate documentation so we don't have to worry about that piece. We have programmatic scheduling for all the provisioning, both on the network and the server side, and all the switch changes are done automatically. I like Juniper switches, Cisco switches; I like networking just as much as the next guy, but I don't want a secondary job as a network engineer. So if it's repeatable, if it's something we can automate, that's all part of it. What else do we solve, besides the display going crazy here? I think the machines are taking that over too. We also want to maximize idle machine cycles.
So in any typical R&D environment, any large-scale infrastructure, you're going to have machines that are running and sometimes not doing anything. Susan the scale engineer probably works Monday through Friday; she's not normally going to be running tests over the weekend. But that hardware is still running in the data center. It's still using electricity, it's still costing money, it still has a carbon footprint. So we want to maximize these idle cycles when machines aren't being used, and one of the ways we can do this, again with the programmatic scheduling, is to pick machines that aren't actively being used for workloads, spin them down, and then spin them back up to automatically participate in automated testing. That's done with the YAML-based structure, and it maximizes the efficiency of the hardware, so basically we get all of our money's worth. "Execute!" Oh, God. I don't know how that works. Yeah. So there are a couple of other challenges we want to solve. We want to be more like Airbnb and less like a hobo house where, you know, you have a wiki that someone edits and that's the authoritative source of everything, and God help you if you do this in a spreadsheet somewhere. We want to be efficient, we want all the scheduling done ahead of time, and we want to be as mechanical and surgical as possible. So we have clearly defined operating guidelines on the residency limit: you can only have an allocation for up to four weeks. After those four weeks are done, you need to put in another request, get in a queue, and whether you get a set of machines or not depends on priority. So we've talked about the problems we want to solve, why this set of tools exists, and our internal use case for it. Now I want to dive a little bit into how it actually works on the back end.
This isn't going to win any design awards, but this is basically the back-end infrastructure at a very simplistic level. At the very top we have Milton from Office Space; everyone's familiar with Milton with the red stapler? That's right. So we have Milton at the top of all this; he's the user of the environment. And we have the Quads tool itself, which does all the orchestration: calls to other services, to some provisioning back end. We chose Foreman, but you could use something else. It will call tools to go to the individual switches and reconfigure VLANs. It will go to the IPMI interfaces, and if it's a new server, it will create whatever users are supposed to exist on that IPMI. It will carve out RAID disks. It will do all sorts of various stuff. And really the meat of this is the move-host component, which we'll get into in a little bit, when all of this happens on a schedule. So when you have a set of machines and you're given a schedule to work on something, you get your own isolated area and you're guaranteed a certain level of performance. We usually have 40-gig or 100-gig networking, and you have this carved-out time to do whatever you want without impacting anyone else, because the VLAN isolation, the network changes, are all done as part of the entire provisioning life cycle. So on these systems we take that change, and what machines are actually doing what, and we update the documentation automatically. There's also a wiki update process that happens. We use the XML-RPC Python library and WordPress, and we basically scrape Foreman. We pull all the information out of Foreman, plus the information that Quads knows about that machine's metadata: who owns the machines, is there a change, is there an RT ticket associated with that certain temporal assignment? All of that is munged together and then automatically posted as documentation for people to see.
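That scrape-merge-post flow (pull host data from Foreman, merge in Quads metadata, push the result to the wiki over XML-RPC) can be sketched as a small rendering step. The field names below are assumptions chosen for illustration; the real code lives in the Quads repository.

```python
def assignment_table(hosts):
    """Render merged Foreman + Quads metadata as a Markdown table.
    Each dict is one host's merged record: hostname, the cloud it's
    currently assigned to, its owner, and the associated RT ticket."""
    lines = ["| Host | Cloud | Owner | Ticket |",
             "|------|-------|-------|--------|"]
    for h in hosts:
        lines.append("| {name} | {cloud} | {owner} | {ticket} |".format(**h))
    return "\n".join(lines)
```

The resulting Markdown would then be pushed to a fixed WordPress page ID over XML-RPC, so the same documentation page gets updated in place rather than re-created.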
And the last thing that happens is we have a set of applications that are useful for us, as the shepherds or the pit crew of this environment, to see the health of the machines and see what's happening. We have Ansible playbooks that will automatically add Nagios checks for the core things we care about: CPU, memory, temperature, fans, disks; things that won't affect performance when we check them, but that we need to know about to make sure the overall environment is healthy. We also add a Grafana component so we can graph the per-port bandwidth usage of each server over time, so we know how idle or how busy it is. This will factor into how we determine which machines are eligible for automatically spinning up over the weekend, if they're idle, to perform activities. So we talked a little bit about the automatic wiki generation: we want to automate the creation of documentation and never have to edit it ourselves. We also want it to be 100% correct, because humans make mistakes, and if your documentation is wrong, that can lead to serious issues across the board. You could wipe out someone else's machines; it could make it difficult to do your job. So having correct, up-to-date documentation is absolutely paramount. We have a pretty simple system: we make Foreman the authoritative source for all of the infrastructure, for DNS, for host entries; some of the host-level and switch-level information is in Foreman. So we only have one authoritative source as the true source of knowledge. We have this ancillary metadata in the YAML structure; we combine that together and turn it into Markdown, and then we take that Markdown and use the WordPress Python API to keep a page up to date. So I'm going to go through some examples of what it looks like. This would be the main page.
If you've worked in an environment with a lot of servers, you typically have some place where you keep rack and server information. There are a lot of tools to do this: there's RackMonkey, some people use a traditional wiki, you might use Confluence, whatever works for you. We chose WordPress because the API is pretty good, it's easy to spin up, and it's pretty lightweight. So this is the front page of what would be your racks documentation. You have your typical information, which isn't going to change that much: the serial numbers and the MAC addresses aren't really going to change. And by the way, I did not blank out the serial numbers here, and these servers are actually out of support, so if you would like to pay for the warranty on them, the serial number is up on the screen. Nobody's going to get one over on us by seeing that. You have other basic things, but the one thing I want to point out here, the part that's dynamic, that's going to update and always be correct, is the workload. We see different cloud assignments; "cloud" is a generic name for an environment, a subset or grouping of servers that we give to somebody. That's going to update depending on what the machine is doing. And we also have an owner: Russell Bryant owns these machines currently, or he did at the time of this screenshot, and over time that will update. The last part is a graph; this is a little outdated, because there will be a link here that goes to Grafana, which will have collectd statistics on every 10-gig, 100-gig, and 40-gig port for that machine, so we can see what the bandwidth utilization looks like over time. Drilling down a little more, if we click on the assignments part of the page, up here, assignments, you drill down into an overview of what all the machines in the environment are doing right now. What do they have?
Who's the owner, and is there a ticket request associated with the set of machines? So we have an audit trail, but everything's linked in one place. The last thing I want to point out here pertains to OpenStack: who's familiar with TripleO, or Ironic, or Director? We have a few bugs there as well. So, we auto-generate the instackenv.json so you don't need to put that together yourself. That's one extra step you don't have to do when you get handed a set of machines to deploy OpenStack on; this will auto-generate it for you. It will actually omit the information for the undercloud, and we leave that reprovisionable as you see fit, but we go in and massage the PXE order and disable PXE on the Foreman interface, because Ironic likes to do its own management of PXE, of interface ordering, and things like that. So we simply prep the environment and automate as much as possible, so that when a new OpenStack environment is presented to somebody, say they're doing massive API response-time testing or they want to test multi-cell performance (there's no end to the number of tests that have been done in the lab), everything is taken care of for them, and the only time they spend is actually testing the product. They don't have to do the installation, they don't have to munge with a JSON file and mess with the out-of-band management. We try to anticipate and do all of that for the people using the environment.
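A minimal sketch of that instackenv.json auto-generation, omitting the undercloud node as described. The input field names and IPMI values are my assumptions; the `pm_type`, `pm_addr`, and `mac` keys follow the TripleO instackenv.json node-registration format.

```python
import json

def build_instackenv(hosts, undercloud):
    """Emit an instackenv.json body for TripleO, skipping the undercloud
    host so that it stays reprovisionable on its own."""
    nodes = []
    for h in hosts:
        if h["name"] == undercloud:
            continue  # leave the undercloud out of Ironic's node list
        nodes.append({
            "pm_type": "pxe_ipmitool",      # Ironic IPMI power driver
            "pm_addr": h["ipmi_addr"],
            "pm_user": h["ipmi_user"],
            "pm_password": h["ipmi_password"],
            "mac": [h["mac"]],              # MAC of the provisioning NIC
        })
    return json.dumps({"nodes": nodes}, indent=2)
```

The tenant would receive this file ready to feed to `openstack overcloud node import` (or the older instack tooling) without having to collect the out-of-band details themselves.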
So if we drill into one of those individual cloud assignments, we'll pick one here, let's say cloud02, we can go into it, and it will also tell us when the assignment started, how much time is left on it, and what the total duration was. As people want to see more information, we'll hack on this and get it to show more. The combination of all those things provides a level of transparency to the people using the hardware: you can very easily see what the total capacity is, what other people are doing, what usage looks like, and how long you have on your machines. We also auto-generate a faulty systems list. If you're familiar with Foreman, there's a concept called host-level parameters: basically a piece of metadata that you can set and then execute operations against. So if we have hardware that's faulty and we don't want it participating in the larger pool of machines, we simply set broken_state to true. If a machine has broken_state true, it's no longer available for allocation, and it actually shows up under a faulty systems category that we can then go back through and either fix or replace. But we don't waste any time saying, "Oh, you can't do your test because the machine's broken"; we'll just swap it out with another one and get it fixed when we have time. The other thing here is the unassigned systems: anyone can look in and see if there are any free systems. Cool. So there's another component to the auto-generation of the documentation, which is more visual: we also generate a calendar view. You could load this into your work calendar; we happen to generate it as a PHP calendar file and an ICS file, but at any point in time you can see what's happening in the environment now, what happened three months before, and what's going to happen three months later. So this helps with planning across multiple groups using a subset of hardware: if you know what
your testing schedule is, if you know what your development schedule is, you can schedule all of this ahead of time, so the hard work and the lifting is already done for you, and you can plan your schedule around what's going to be available. There's no guesswork. Another visualization we generate is more like a heat map. It breaks things down by days of the month, and then we have a per-month view that we can scale out as far as we want, so we can tell at one glance what the usage pattern in the lab looks like. We can say, "Oh, there's this big gap of five days where these machines aren't being used; that's an excellent target for automated testing of something." So again, these are steps toward being as efficient as possible with the scheduling of a large amount of infrastructure. Now I'm going to get into some of the commands that Quads operates on. I don't want to get into too much detail, because this is all documented on the GitHub page, but right now it's CLI-only. We have plans to make it more service-oriented and have proper push notifications and an API and all that, but as you can see, there are a lot of bugs to work out, as you saw earlier, and it's quite deadly, so we're going to start slow and move up. If you're getting started for the first time, you would first define the environments that you want to work on. We just call each one a cloud; it's a generic name, and it doesn't actually have to be a cloud. It could be JBoss-related, it could be any sort of product scale and performance test, but we call it a cloud. So you would define your isolated areas, and then you would have network VLANs that correspond to the requirements of each of those work groups, and then you would give it a description field; all of this is driven by a Python CLI that we have. The next thing you would do is define a host and then associate that host with any one of the cloud definitions that you created. And there are other little commands; you can
then list which hosts are currently managed by Quads. The next part, and really where it starts to get interesting, is when you add schedules to hosts. Hosts can have an unlimited number of schedules, as far in the future as you think there will be humans on the planet. So this is an example command for adding a new schedule to an existing host, associating it with a certain workload or a certain cloud. Then you can list the schedules. What I wanted to show here is that we have five schedules associated with this one machine. We have a default environment where, when a machine doesn't have work to do, it will actually spin down and power off, its VLAN config will be changed, and it will move to the available pile that we saw earlier on the wiki. Right now there's no availability for three months out, but if there were nothing for this machine to do, even for a couple of hours, it would completely power off and move to that other spot, to not run, to not use electricity, to be efficient. But this host here, for example, c08-h21, has five schedules. We can see the schedules it had before, we can see it's currently on schedule five, and we can see that schedule five is only about a day long, so it was part of some specific test. The next one up is schedule four, which starts the 6th of February and ends the 27th of February. So you have this record keeping in the metadata for each one of these hosts, and that's the crux of how it's managed. Then you have summary commands; you can list what's currently there. And the actual heavy lifting, the actual provisioning, the actual going to switches, interacting with them, changing the configs of the ports each server is connected to: Quads does not do that. Quads is just a facilitator. There's a move-host flag that you tie your provisioning back into, and we provide scripts that currently go out and talk to Juniper
switches. We also have Dell Force10 switches, and we're going to be doing the same thing with Cisco, but if there's a vendor we don't support yet, it'll be added later. One of the things we want to move to, which in my opinion is the proper way to do it, would be using OpenDaylight or SDN, some plugin architecture to talk to these resources; but for right now we have tooling that goes to the switches and makes the changes, and this all happens with move-host. There are some other auditing tools that we ship with Quads; the most useful one is find-available. You feed find-available the number of servers you need and how many days you need them, and then optionally limit it to a type of hardware. So if you needed 10 servers, and you wanted, say, a Dell XD, which has a bunch of disks and is usually ideal for Ceph, and you wanted them for 30 days, this tool would go and inspect the metadata of each of the servers Quads manages, give you the next available time frame that meets your requirements, and spit out a list of which machines those would be. So if you're an operator scheduling these future workloads for people, this makes it very easy to know which target machines will be available. You can simply enter them in the CLI, and later on, when the time comes, they'll just spin up and be passed off to somebody. And again, everything is done out of a common configuration file: we have a Quads YAML file. These examples are in no way exhaustive; these are just a few options I've handpicked for illustration, but one thing we've tried to be very good about is that everything is variable-ized. If you want email notifications, that's simply something you can turn on and off. IRC notifications: if you have Supybot or some other IRC bot, you can have notifications for events pumped through it and announced on
channels, things like that. So again, it's not exhaustive, but it's all on the GitHub; you can check it out at your leisure. So what's currently working right now? The automated system and switch provisioning, which is the heart of it. IRC and email notifications. We currently use RT for tickets, so if there's a need for another ticket system to be supported, as long as it has an API or some way to interact with it, that's always a possibility. The IPMI provisioning is in place, and the calendar visualization and the wiki page stuff that we've shown and talked about, that's also available. And most recently we have a full CI sandbox. We use Gerrit for code review, so every patch that goes in runs through basically a Quads sandbox, where it tries every command and feeds it false information to see if it catches the error. There are always improvements there, but before, we didn't have any CI, and you guys saw what happened earlier. What are we working on now? What are some new things that might be going into Quads that are in progress? We'd like to have a web interface. I'm a big fan of Flask; I think it's lightweight and does a great job, so we want to have a Flask interface in front of it. And to a certain extent, we'd like to get to the point where we can let people provision and schedule their own hardware, especially if it's not a shared pool. This might be useful for people who have their own hardware and want to use Quads to manage it; then they obviously need an easier way, or multiple ways, to manage it besides a CLI. We also want to be a little more modular. You have to keep in mind we're sysadmins, DevOps people; we are not software engineers by trade. So you'll see that a lot of this was written from an operations perspective, and less from a "this is the correct etiquette, this is the right style" perspective. We'll get to that later; we want it to work because it saves us time. Some of the more constructive feedback we've gotten is that it needs to be more
modular, that we need to use more of a plugin system. One of the bigger places for this is the SDN part: instead of having an ad hoc tool for every switch vendor (and that switch vendor might change their syntax, which we then have to maintain), we want to leverage some of the other awesome work out there in the OpenStack community for SDN, for managing switches, specifically OpenDaylight and some of the other initiatives. So that's at the forefront of improvement. The Foreman view integration that I talked about will go in really soon, and we want a little better Ironic support as well. Right now we do a lot of things to make it very easy: when people get passed a subset of machines and a network for OpenStack, they just go, they run the TripleO stuff and it works; or optionally we even run the TripleO quickstart stuff, the director stuff, for them, so they actually have a fully deployed cloud when they get handed the machines. They don't have to wait for introspection, they don't have to wait for the installation, any of that. One of the other things we've been asked for is to optionally support LACP for the switch provisioning: two 10-gig NICs or two 40-gig NICs participating in an LACP bond configuration. But the next tenant might not want that, so we need flexibility in supporting more network architectures. There are just a couple of other things we also want to improve, but for the most part it's about six months old and there are only two of us working on it, so we're trucking along. That's really basically it; now is the time for questions. Yes? Right, it's a good question. The question was: what is the storage solution, where does storage fit into this, since we only talked about the compute side and the network side. We leave that completely up to the tenants. For OpenStack, we typically have a class of machines that are ideal for Ceph, usually the larger Supermicro machines with a ton of disks, and we will deploy
a director-based OpenStack deployment, then install Ceph separately and usually marry the two. On other occasions we'll use Heat to actually deploy a Ceph component along with the OpenStack deployment. But we don't do any long-term retention of any of the result data unless it's asked for. We offer, and we run, a several-node Elasticsearch cluster, so a lot of the data that gets pulled out of the tests is indexed directly into Elasticsearch, and then that's moved elsewhere, to another location, and saved for offline use. We try to capture everything in Elasticsearch, and we try to get out of the business of "okay, this data is sitting in /var, and I have to remember to back it up before my machines go away or I'm going to lose it." So we push people to use things like Elasticsearch to push their data out of their environment, so that if the machines do go away, nothing is lost, because the results are archived and stored in real time, or as close to real time as we can get. I did want to give a plug to a performance and scale tool that we also develop, called Browbeat, which does performance and scale testing of OpenStack. Part of Browbeat is that it sets up a full Elasticsearch, Logstash, Kibana (or Elasticsearch, Fluentd, Kibana) stack for you, and it has a mechanism, with an Elasticsearch driver, to push all of your results directly into Elasticsearch and save them for later. If there's significant interest in adding storage, we could do that, but generally it's up to the user, and if they request it, we'll obviously help them. As for a big disparity in hardware, a wide swath: not really. I can see where that would be useful, but for our use cases, especially as they pertain to performance and scale, we try to keep uniform specs across a lot of nodes. We might have different classes, like "this is a compute class" or "this is going to be heavy on disk or I/O," so we try to have uniformity across the hardware, but we try to
have lots of types of uniformity so if there was specific workloads like for ARM or you know something that was not as common and there was enough of demand for it we absolutely would rack that hardware get it in place and get it integrated but one thing we don't do because it's kind of skews test results is you don't generally want to deploy like a multi node application or a stack across machines that have drastically different hardware configuration because it tends to kind of skew the results that you would get back so because at the crux of this is you know managing an environment that's rapidly provisioned all the time this is efficient as possible but the end result of it is to glean performance and scale data out of a random amount of applications or workloads so you could take this concept the meat of this concept is in the automatic scheduling for future things in the generation of the documentation and you could use it to manage a completely different infrastructure it didn't have to be R&D infrastructure it could be you know a rapidly provisioned or growing production infrastructure as we've seen today I would get some of the bugs fixed first but does that answer your question awesome anybody else so it's a great question and I got to answer for it we it works like this but the general idea is that if you have one authoritative source for everything or maybe one or two sources of information that that is the one true place to get it then you can scrape that together and then use something that's API friendly media wiki will do it WordPress will do it there's a ton of other ones that do it we chose WordPress and we're basically I can do the code to do it it's pretty simple but you create a page and the page has an ID so in your API call with the data you basically say post this information but reuse this page ID so it's the same page that people are viewing but as the information changes if there's something to change it will simply just update the parts 
that have changed. That saves you the trouble of "oh wait, Bob Smith has these servers this week and Susan doesn't have them anymore, and I forgot to update the wiki," where everyone who looks at it is operating on bad information. It's a fairly simplistic workflow, but we've had some good success with it.

Right. Well, if a machine is removed from Foreman, it'll disappear, or if it's in a state that we don't want to categorize. So for broken hardware: if a machine gets the broken-state parameter set, it will not show up and be eligible, but it will show up in the list of broken servers. We also have an exclude list: there are certain machines you're going to have in Foreman, or whatever your provisioning backend is, that are utility servers, and you don't want them to participate, but you do want them to still be managed. So we have an exclude list, and those are automatically removed.

But to answer your question: no, it's not as efficient as it should be. It basically regenerates the entire page and then copies it in. The copy action is really quick, a second at most, but the actual regeneration of the current information takes about 20 or 30 minutes; it doesn't copy anything in until it has a new, updated copy of the page. There's room for improvement everywhere. This is definitely not the model of optimal software development; it's what works for us and what we want to share with other people.

All right, I think we're at 45 minutes, so we're doing pretty well for time. Anybody know any good jokes? Anybody? Well, if you have any questions about this, let me know; I'll be happy to talk to you until you're sick of talking to me. You can check out all the code on GitHub under redhat-performance, and you can find me at hobo.house. That's where I put some information, like a blog post just on using the documentation-generation part of this without anything else. My hope is that, even if this isn't a drop-in replacement for someone else, there are ideas or elements of these tools that people can reuse, and ideally, beyond that, that people can contribute back. That's the idea, anyway. Thank you for your time, folks.
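As a rough sketch of the "push results out as you go" pattern described earlier, here is what serializing test results for Elasticsearch's bulk API (newline-delimited JSON: one action line, then one document line) could look like. The index and field names are invented for illustration; Browbeat's actual Elasticsearch driver differs.

```python
import json

def to_bulk_payload(index, results):
    """Serialize result dicts into an Elasticsearch _bulk request body."""
    lines = []
    for doc in results:
        # One action line per document, naming the target index...
        lines.append(json.dumps({"index": {"_index": index}}))
        # ...followed by the document itself.
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"  # the bulk body must end with a newline

payload = to_bulk_payload("scale-results", [
    {"test": "browbeat-rally", "metric": "boot_time_s", "value": 41.7},
])
# A real client would POST this to http://<es-host>:9200/_bulk with
# Content-Type: application/x-ndjson, so results survive even after
# the test machines are reprovisioned.
```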
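The wiki-update workflow described above, regenerating a page and posting it back under the same page ID, can be sketched as building a WordPress REST API request. The endpoint shape follows the standard WordPress REST API (`POST /wp-json/wp/v2/pages/<id>`); the function, URL, and field choices here are illustrative, not QUADS' actual code.

```python
import json

def build_page_update(base_url, page_id, title, body_html):
    """Return the (url, payload) pair for rewriting one fixed wiki page."""
    url = f"{base_url.rstrip('/')}/wp-json/wp/v2/pages/{page_id}"
    payload = {
        "title": title,
        "content": body_html,  # the freshly regenerated assignments page
        "status": "publish",   # keep the page visible to everyone
    }
    return url, json.dumps(payload)

url, payload = build_page_update("https://wiki.example.com", 42,
                                 "Lab Assignments", "<table>...</table>")
# A real client would now POST `payload` to `url` with authentication,
# e.g. requests.post(url, data=payload, auth=..., headers=...). Because
# the page ID is reused, readers always see the same page, just updated.
```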
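The eligibility filtering described above, where broken hosts drop out of scheduling but still appear in a broken-servers report, and excluded utility servers are skipped entirely, can be sketched like this. All names and data shapes are assumptions for illustration, not QUADS' actual internals.

```python
def partition_hosts(hosts, exclude):
    """Split hosts into (eligible, broken) name lists, skipping excluded ones."""
    eligible, broken = [], []
    for host in hosts:
        if host["name"] in exclude:    # utility servers: managed, never scheduled
            continue
        if host.get("broken", False):  # broken-state parameter set in the backend
            broken.append(host["name"])
        else:
            eligible.append(host["name"])
    return eligible, broken

hosts = [
    {"name": "c01-h01", "broken": False},
    {"name": "c01-h02", "broken": True},
    {"name": "util-dns", "broken": False},
]
eligible, broken = partition_hosts(hosts, exclude={"util-dns"})
# eligible == ["c01-h01"], broken == ["c01-h02"]; "util-dns" appears in neither.
```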