Good afternoon, everyone. It's good to see you out here, and good to see that after the afternoon siesta you're here to hear about OpenStack-Ansible and ironic. If you wanted to know how to make bare metal deployments easy, this is the talk for you. So yeah, it's really great to see everyone; there's a lot of people, which is really awesome. Hopefully some of the stuff we tell you you'll find super useful, and hopefully some of you can get involved afterwards as well. But just to introduce ourselves: my name's Andy McCrae. I'm the PTL for the OpenStack-Ansible project for the Ocata cycle. There are our Twitter details and email addresses, and probably the best place to reach us is in the #openstack-ansible IRC channel on Freenode. And my name is Michael Davies. I'm also a Rackspace employee; I've been with the company for about four years, and I work from my home in Adelaide, Australia. Again, we're really happy for you to contact us if you have any questions after the presentation. So, a quick overview: what are we going to be talking about today? We want to tell you about the journey we've undertaken to add ironic support to OpenStack-Ansible, and it was a bit of a journey for us.
We had quite a few hurdles to overcome. But it's not just the journey we want to tell you about today: we also want to tell you how you can try this out for yourself, how you might get involved with the efforts we've been making and build upon them. We want to tell you about a couple of the limitations we faced, and about what we're planning to do next, and again, hopefully get you involved in that. So let's talk a little bit about OpenStack-Ansible to start off with. OpenStack-Ansible is an Ansible-based deployment of OpenStack. It sounds pretty obvious when you look at the title, but we're not really in the business of coming up with clever names, probably because we're not particularly clever. One of the differentiators is that the packages we use for OpenStack are all built from source. So when you deploy Nova or Keystone or Glance, we take the git SHA, build a pip wheel Python package, and you then install that package. There's no going to a vendor for packages or anything like that, and we think that allows us to give you the code that the developers wrote, the way they intended it to be used. And in order to ensure that you keep versions in check, we actually have a repository server that these packages get uploaded to, so you can be sure that if you were to deploy another Nova compute host, for example, you'd get the same packages that are on your other compute hosts. We utilise LXC containers and Python virtual environments. We do this to separate out the packages, their dependencies, and the various services set up within the environment, and it's just a good way to enable us to do some clever things around upgrades, some clever things around scaling, and various other things around segregating dependencies. Our main aim for the project is production deployments. We don't want to deploy the new shiny thing that might not work or might break; we want the things that you know are going to work, and we want you to know that when you use OpenStack-Ansible you can get a production deployment going. It's actually for that reason the basis of the Rackspace Private Cloud powered by OpenStack offering that we have, the RPC-O offering. But I do want to make it clear: it's not a Rackspace-only thing. We've got a really large community that's started to grow; it's becoming more and more diverse every day, and if anything the Rackspace involvement has slowed down a bit. When it started it was entirely Rackspace, but we now have developers and contributors from various universities across the world, as well as some really big corporates that I don't want to name, but you know, they're all involved. We've had really great feedback and really great input, with various people adding new things to the project every day. So now you know why OpenStack-Ansible; now the question is, why ironic? Why do you want to use ironic with OpenStack-Ansible?
It really comes down to the basis of why you would want to use ironic in the first place: it enables the use of hardware-specific features. You see, when you deploy your node and you have virtualization, you may not have access to everything on the node that you'd have if you were running straight on the bare metal. For example, networking: you may have some networking capabilities that you lose through virtualization. Things like GPUs: if you're doing Bitcoin mining, or perhaps you're wanting to do video rendering, you want to use ironic to deploy an image straight onto the hardware so you can make use of those capabilities. Trusted computing modules, if that's important to you: you're going to want to use ironic to keep away from the virtualization, because through that abstraction you might lose access to the things you want. And of course the last one there is performance needs: if you're trying to squeeze the last drop of performance out of your hardware, perhaps ironic is what you want to use. The other thing is that adding ironic into OpenStack-Ansible helps us complete the deployment landscape. When you think about OpenStack, most people think about virtual machines; you talk about hypervisors and launching VMs, and it doesn't matter whether it's KVM or Xen or any of the other supported hypervisors, VMs are what we typically think about with OpenStack. But we've heard this morning and yesterday about how containers are now the next shiny thing, and containers are great as well. You throw bare metal into the mix, and all of a sudden, for a computing resource, you can have virtual machines through containers through to bare metal, and depending on your use case you can choose the one that's right for you. And the good thing is that it's all sitting behind the one OpenStack API, so you can choose whatever you want and weigh up the whole performance-versus-cost trade-off. The other point I've got there is the undercloud. One thing we're finding is that sometimes organizations have a large investment in applications they want to run on top of, say, something like Kubernetes, or Hadoop and big data, and they just want to run on top of a node. They don't necessarily want to use the Kubernetes implementation in OpenStack; they just want to run their application on top of some nodes, and they really want to use OpenStack as an IaaS platform, infrastructure as a service. By adding ironic support to OpenStack-Ansible, you can deploy all of those nodes and then run your particular workload on top, where it makes sense for you. And just to expand on those use cases a little further: if you're running a high-performance database and doing raw I/O, and you want to remove as much abstraction as you can so you're getting the best database performance, maybe ironic is the thing you want to use. Perhaps the idea of having single-tenant hardware is important to you, maybe for security or regulatory reasons: you don't want other people's virtual machines on the same box where your compute platform is running; you want that box all to yourself. And of course the other ones there: something I'm seeing a lot of is a huge demand for Hadoop and big data, and ironic is a way we can deploy those things quite successfully. Well, you've probably heard enough of me babbling on about why you want ironic, and from Andy about why you want OpenStack-Ansible. So let's get straight down to it: how do I get this thing running today? Here are just a few command lines showing how simple it is to enable ironic in OpenStack-Ansible. These commands set up an all-in-one, and for those not familiar with OpenStack-Ansible, I just need to set the scene of what this is.
This is a deployment of OpenStack running in containers on a single node: all of your OpenStack services on one node. And this isn't something you would want to do in a production environment, right? I just want to make that really clear: don't go home and do this for your Fortune 500 company; this is not the way you deploy OpenStack for a large company. But if you're a developer and you want to do some development and test, this is perfect for trying out ironic and the different OpenStack services. So it really is as simple as cloning the git repository for OpenStack-Ansible. Then I've got a very nasty sed command there on line four, but all that's doing is adding the ironic.yml.aio file to the bootstrap-aio.yml file, and that just enables the creation of some LXC containers for the ironic software to run in. After we do that, we just bootstrap Ansible. We then configure Nova by simply adding that one line there, saying that the Nova virt type is ironic; what that does is tell Nova not to set up KVM or some other hypervisor, but to use ironic instead. You add that into user_variables, and after you do that it simply is as easy as telling OpenStack-Ansible to go away and set up everything; you come back a little bit later and you've got a whole OpenStack deployment ready to go. So what does this give you? It gives you a whole bunch of containers running OpenStack services, all separated out. It configures the network for you. It configures PXE boot.
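The all-in-one setup walked through a moment ago can be sketched as a shell session. This is only a rough illustration: the script names and file paths here follow the Mitaka-era layout of the openstack-ansible repository and may have changed in later releases, and the exact sed expression from the slide is release-specific, so it's shown as a comment rather than reproduced.

```shell
# Sketch of the all-in-one (AIO) setup described above (Mitaka-era layout).
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible

# Enable the ironic scenario: add ironic.yml.aio to the list of conf.d
# files consumed by the AIO bootstrap (the talk does this with a one-line
# sed against bootstrap-aio.yml; the exact expression is release-specific).

./scripts/bootstrap-ansible.sh   # install Ansible and role dependencies
./scripts/bootstrap-aio.sh       # prepare the host: loopback disks, networks, LXC

# Tell Nova to use the ironic virt driver instead of KVM or another hypervisor.
echo 'nova_virt_type: ironic' >> /etc/openstack_deploy/user_variables.yml

./scripts/run-playbooks.sh       # deploy the whole stack
```

After the playbooks finish, all services run in LXC containers on the single host, which is why this is suitable for development and test but not production.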
It configures DHCP. And here are just a few commands to verify that. We can do an lxc-ls to list the containers, and we can see that it's created an ironic API container and also an ironic conductor container. Then, the next thing down, if you look from line eight onwards, what it's doing is attaching to the utility container; after you've attached to that, you can source your credentials and do things like an ironic driver-list. What that's actually doing is proving that the ironic API, the ironic conductor, the database, and RabbitMQ are all configured and working. So it's just a way of verifying that it's all working. Further to that, some of the things it's doing, as I mentioned a moment ago: it configures the ironic API to sit behind mod_wsgi; it sets up Neutron for DHCP; it gets Glance and Swift ready to store both the user image and the deployment image; and it sets up your Galera database cluster so that it's ready to go with ironic, as well as RabbitMQ. Of course, when you're talking about setting this up in such a simple way, there are some choices and limitations that come with that. One thing we did is make use of the agent_ipmitool driver for ironic. Now, the reason we chose that particular driver: you see, ironic supports a large number of drivers, which are hardware-specific, but we chose the lowest-common-denominator version, because it allows us to use IPMI to do power control, and it allows us to PXE boot as well. You can have drivers that are very hardware-specific: if you've got Dell equipment you can use a driver that interfaces to the DRAC; if you've got HP equipment you can get one that interfaces to iLO. But for our first version of this, in the os_ironic role, we've just configured it to use agent_ipmitool. One of the limitations we have is regarding single tenancy, and this is actually an artifact of where things were upstream in the projects we're using: in ironic and in Nova and in Neutron. You see, when we were developing this, we were doing it in the Liberty release, and we didn't have multi-tenant networking at that time, so we didn't have the ability to do network separation. What this means is that, really, for security reasons, you would only want to have single-tenancy deployments; that is, you only want to deploy ironic nodes for a single customer or a single project. The reason is that otherwise you have these multiple nodes, and if different customers are all using them simultaneously, they're on the same network segment, and that's not great from a security perspective. So that was a limitation in the upstream projects, but I'm happy to say that going forward that's changed, and we really will incorporate that into this. The other thing is that because it was single tenancy, we didn't set up the cleaning network; that's now a work item we're going to have to do in the future. The other thing there, the middle column there, is about hardware. You know, writing software that interfaces to hardware presents its own set of challenges. Hardware by nature is unreliable; it's flaky; things fail unexpectedly. So what happens is, you know, Andy and I would write some code and attempt to deploy to a node, and it would fail. Now, why did it fail? Did it fail because of a mistake in the code that we wrote?
Was it because there was a bug in some of the other software we were consuming, or because there was a problem with the disk controller, or a temporary network outage, or something like that? So it's a bit of a round trip, right? You go away and you make more changes and you try again, or you spend some time debugging through it. Hardware is typically unreliable, and it's not just that; even the software is unreliable. We came across a problem where several nodes couldn't talk to each other, and it came down to a network MTU problem: Neutron was flipping our MTU value on us. What this really does is add to the deployment cycle time. It takes a long loop to start again and redeploy to hardware. You see, when you deploy to hardware, you turn the machine on, it does a POST, you do a DHCP request, you get back an IP address, you start the PXE boot process, you download an image for the deployment, then that image running on the machine downloads its final image and writes it to disk, and that might take a while, and then it reboots again. Then you see whether it actually worked, and of course if it didn't, you start again. That process might take five minutes or so, or even longer, so it's a lot slower as compared to, say, developing against VMs. We also found that very small differences in hardware can trip you up when you're developing software. And what it meant is that when you do this kind of testing, it's manual testing: you're manually running some software, deploying, and seeing what happens; it's not automated. When you're developing some other OpenStack software, you're able to test very quickly using VMs, and it's software-based only, but when you're doing it with hardware, it's a manual testing process. And so the corollary is that you can't set things up to happen automatically without a fair bit of effort. You know, if I were writing some software that didn't touch hardware, every time I made a change I could run the set of unit tests, verify it works, and be happy with that. When you come to deploy with hardware, it's a longer time cycle before you can get those results back. And of course that has consequences when it comes to gate testing: once this is working and I've checked it into the repository and made it available for others to use, it really is important to be able to test it end to end, but that's hard to do without having hardware available for gate testing. So we got to the bit where we now needed to implement ironic in OpenStack-Ansible, and we were starting our journey. Michael had actually started it already by himself and was trying to get it implemented, and he reached out for help from the OpenStack-Ansible team. It was a bit of a difficult time in OpenStack-Ansible to be implementing a new role, mostly due to the fact that in the Liberty cycle we had one massive repository called openstack-ansible where all the roles lived, but as we went to implement this role, it was the Mitaka cycle and we'd started moving roles into their own repositories. This meant we could no longer put the ironic role in the openstack-ansible repository.
It had to get its own repository, which is a great idea; the move has now happened, it was the right decision, and it now works a lot better. But it meant we were navigating a lot of the changes as they happened, because no new roles had yet been implemented as their own repositories. We'd come in in the morning, and Michael would have already set up some of the database services and the Keystone services, and the way the project wanted them done would have changed. So we spent a lot of time spinning our wheels on things that were actually not related to ironic at all; they were related to how the project works. With that came the dependencies between the openstack-ansible repository and the ironic role itself, and more importantly between the other OpenStack roles and the os_ironic role repository. For example, ironic is quite special in that it needs quite a few dependencies: it requires Nova, Keystone, and Glance, and then it also requires Swift on top of that to do a tempurl, which is where it puts a temporary URL for the image it's going to get from Glance. So we had to somehow figure out how we were going to namespace variables, which hadn't been done yet within the project, all at the same time as trying to implement ironic in a useful way. It really put up blockers that, if they hadn't been there, I think would have let us get through this a bit quicker. And then the last point: since we'd now moved to individual roles, we needed individual role testing, and as Michael said before, it's quite hard to test ironic if you don't have specific hardware to test against. At that point in the ironic life cycle, even ironic upstream didn't have that, so we had to settle for some API tests, which is not a great test of functionality. It's a test that you've deployed it and that the API is responding correctly, but it doesn't really test all the connections between the bit where you make the API call and the host you would like to be spun up. So we had a couple of challenges around that. So, the technical challenges we faced: like I said, we had to split out a new role; we had to implement a new role while struggling with some of the hardware challenges Michael talked about; and we also had to refactor all the time. We were refactoring the role, trying to fit into a new system that wasn't quite there yet, and then dealing with all the changes going on within the project. That was pretty difficult. And then the physical hardware requirements: this is really important to us, because since the os_ironic role got implemented, we've had a couple of bugs come in that could easily have been avoided if we'd had legitimate gating on some kind of hardware, or at least faked hardware. At the time, in the Liberty cycle, that didn't exist, so we had that kind of problem to contend with. There were also some pretty big non-technical challenges, which are interesting. For starters, I've been working on deployment projects for the last four or so years. I was part of the OpenStack Chef project for a while, and we had our own deployment tool set before that, and then I moved on to OpenStack-Ansible. So I've been doing deployments a lot, but my knowledge of ironic extended as far as knowing what it is and roughly how it works, and not much past that. So understanding how the networks all fit together, and how the various pieces of the puzzle fit together, was a little bit difficult for me. And my challenge was that I didn't know Ansible at all, and I had no experience with the OpenStack-Ansible project either.
But on the other side, you know, I was very familiar with ironic, and so we had this thing where Andy had half of the puzzle and I had the other half. And there were some more challenges as well. Yeah, I mean, when we look at that kind of challenge, it's interesting, because like I said, Michael had actually started trying to do it in Ansible before I even came along, and some of the stuff he'd done: he'd rewritten the way we did databases, he'd rewritten a RabbitMQ server. For someone who works on the project every day it seems so simple; you'd just say, we've got a role for that, just run the role and there's your database, you don't have to do anything. But as far as he was concerned, he was being told to make this split-out project; we were doing the role split, and it needed to run as its own thing. So, obviously ironic needs a database, so I should run the database in the ironic role. It was that kind of thing, and we managed to speed up really quickly as soon as we both got on board and started working together. And it was interesting, because as Michael said, he's from Australia and I'm based in the UK, so there's roughly a 12-hour time gap, and at first it was quite hard. There were a lot of late-night meetings and early-morning meetings for both of us, and getting a good cadence going was quite difficult. But by the end of it, we'd actually got it working quite well. I'd have a meeting just before I went to bed to tell Michael where I was at, what I thought we should do next, and what needed to be done; in the morning I'd get up and stuff would have been done, and I'd then look at what needed to be moved along from there. It actually worked really well once we embraced the fact that we weren't going to be working at the same time, and weren't going to have the ability to do the same things at the same time. In the end we actually took advantage of that, I'd say, rather than letting it slow us down. So, hardware lab availability. We've mentioned the requirement for hardware to test and install things on, but one of the non-technical aspects of that is that getting hardware is a slow process. If you go to your organization and say, we need ten servers to deploy things on, it typically takes a long time, even at the best organizations. It takes time to get the servers from somewhere, rack them in a data center, and do all those things that are pretty normal, but the lead time is quite high. With ironic it's slightly worse, for us at least, because it wasn't a standard configuration. For ironic you need your IPMI devices, so the DRACs or the iLOs or whatever else you have in your hardware, to sit on a network that can be reached from the ironic services, and normally we segregate those out so you can't connect to them unless you're on a specific network, for security reasons. So it meant it was a snowflake configuration we were now asking for, and it took quite a long time to get meaningful hardware. In the meantime, we were trying to run around and get special hardware set up on the sly, just so we could get something going and test things. That was possibly one of the most frustrating things we had. And all the while this is happening, we've got customers
that want this now. They want what we're trying to build, and we've got time constraints; we can't sit around waiting for two months doing nothing while the hardware is being deployed somewhere. It's challenging. It's a challenge you have to overcome in a lot of projects, and fortunately we had a couple of workarounds and were able to get stuff done, but it's definitely something you need to be aware of when you come across these things. Then the last technical challenge is around when we went to deploy. I already mentioned that we'd started splitting out the roles in OpenStack-Ansible, which was an important task and really great, but we had our stable deployment for customers running on Liberty, which was in the time frame when we still had the one big repo. Now, implementing a new project in OpenStack-Ansible counts as a feature, so we can't backport it to Liberty, and even if we could, we'd have to somehow take the single role and get it pushed into the main repo. Basically, we weren't too sure how to do this: we've got a customer, we need a supportable release, but we have to build this thing on master. So how do we tackle that?
So we decided to do both. We started off with building the os_ironic role on Mitaka, with the split-out role repos, and we got everything working there. Then, once we had it working, we used a bit of glue and created a branch of the ironic role for Liberty that would integrate with the one-repository OpenStack-Ansible. It took a little bit of work and a little bit of outside-the-box thinking around getting the variables to play nicely together, but at the end of the day we managed to build the project for Liberty, and of course now I'm much better at doing git merges. One of the key points is that we needed to add a new role to OpenStack-Ansible, and it's really important to me that this becomes an easy process. We obviously faced quite a few hurdles; it was mostly around timing, I would say. If we'd done the project a cycle earlier or a cycle later, we wouldn't have had the same issues. But I'd like it to be in a state where that doesn't happen, and where a new project can come on board in OpenStack and say, hey, OpenStack-Ansible is a cool project, we should have automated deployments of our project in there. So we've put a lot of effort, and we're going to continue to put a lot of effort, into making that easy for projects to come on and get involved. One of the things we've done is make a standardized configuration around how you set up your central services. So it's now much clearer that when I set up my database, when I set up my Keystone users and endpoints, and when I set up my RabbitMQ queues, that happens in a certain way, and you can look at all the other roles and they all do it the same way. Having a standard is really important, because you're not running around trying to figure it out yourself. It's already there.
Don't worry about it; worry about deploying your own stuff. And then the centralized testing repo. We've created a centralized testing repo which basically contains a whole bunch of playbooks that deploy the very services you would need. For example, if you're developing a new role and you want to add it to OpenStack-Ansible, and your role relies on Keystone or Glance or Nova, you don't have to think about how to do that. You don't have to write your own little playbook to use the Keystone role; you can just use the existing one, which also comes with a bunch of pre-populated variables based on the inventory you specify, and it'll just deploy it for you. So it's as simple as including the playbook that installs Keystone or installs Glance, and away you go; then you can focus on your role only and not worry about the rest. So yeah, it's a lot about creating clear standards. We'd like to add more documentation around what we expect from a new role: what we want to see when you come in, that we have to see testing, we have to see this, we have to see that, whatever requirements we decide upon. But we've already started to create a standardized approach to how you do things. Cool. So that brings us to, you know, I'm sorry we've had to share some warts with you, some of the problems we faced, but now, where do we go from here? What are our next steps?
So, things that we need to do. Something we've been working on is getting virtual lab gating working. What that means is that the ironic project makes use of something called VirtualBMC, a little bit of software that allows you to power-control QEMU virtual machines and make them appear like hardware. It means ironic doesn't need to change; it can just send IPMI commands. What we can do then is power-cycle and bring up a QEMU virtual machine image, and if we set up our networks correctly, we can PXE boot that image. So what it allows us to do is have pretend hardware. Those problems I talked about earlier, about how we needed hardware to do this and how we'd like it to be automated: that's getting much closer. I will say we're not quite there yet, but there's only that much more to go and then we'll be set. The good news is that the ironic project itself is starting to do this for its own gating, so someone's already forged the path ahead of us. Once we have that, of course, the real benefit is that we can have it set up in the gate of the openstack-ansible-os_ironic role, such that any future change has to go through a proper gate test. It'll stop any regressions coming in, hopefully, and it'll also allow us to do refactoring and make other changes while making sure everything still holds together as it should. The other thing we want to do is make things easier for operators. You see, what we've got today is an ironic role which is quite good if you're prepared to have an operator be hand-held, or if you have a very technically clever operator; they can go and use it today and that's fine. But if we want wider adoption, we really need to make it easier for an operator to make use of. Part of that is simply making it easier to enable the ironic role.
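As a rough illustration of the VirtualBMC approach just described: the command names follow the virtualbmc project's CLI, but the domain name, port, and credentials here are made-up examples, and this assumes a libvirt domain already exists for the VM.

```shell
# Register a libvirt domain called "ironic-node-0" with a virtual BMC
# listening on a local UDP port, then start it.
vbmc add ironic-node-0 --port 6230 --username admin --password secret
vbmc start ironic-node-0

# From ironic's point of view this is just another IPMI endpoint, so the
# same agent_ipmitool driver works unchanged:
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret power status
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P secret power on
```

Because the "BMC" is just software in front of QEMU, a gate job can power-cycle and PXE boot these pretend machines with no physical lab at all.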
Another thing is to improve the documentation. We've got some great documentation people involved in the project, but what we need to do is document how to set up the networks, because ironic is quite special. We have a network for power control, the IPMI network; we have a provisioning network for PXE; and we need a network for cleaning. Cleaning is about this: once you've provisioned a node and handed it over to someone with an image on it, when they hand it back you want to go through a process of cleaning that node and making sure it's ready for redeployment. So we need to make sure the cleaning network is set up. There really is, especially for a multi-node setup rather than a little develop-and-test network, a fair bit of configuration needed, so we need to improve the documentation to make it easy for operators. The other thing is that because we were doing this to a deadline, and doing it as a first cut, there are a number of things we left out. We left out the web UI; we left out ironic-inspector. These are things ironic provides today, but we don't have them as part of this role, and they really need to be added back in. The other couple of things, again, are about making it easy for operators, and part of that has to do with node enrolment. You see, when you want to deploy to a physical node, there are things about that node you need to tell ironic: things like the MAC address you want to PXE boot on, and the IPMI credentials. You can enter those manually via the CLI; you can do an ironic node-create and add them all. That's fine for a little testing and development, and we're able to do that when we're testing with half a dozen nodes, but if you're starting to use this to deploy a thousand nodes, or ten thousand nodes, obviously you don't want to be typing in those command lines.
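For reference, manually enrolling a single node with the Liberty/Mitaka-era ironic CLI looked something like the sketch below. All the addresses, credentials, resource sizes, and the MAC are placeholder values, and the node UUID comes from the output of the first command.

```shell
# Create a node using the same agent_ipmitool driver the role configures,
# pointing ironic at the machine's BMC and describing its resources.
ironic node-create -d agent_ipmitool \
    -i ipmi_address=10.1.2.3 \
    -i ipmi_username=admin \
    -i ipmi_password=secret \
    -p cpus=8 -p memory_mb=16384 -p local_gb=100 -p cpu_arch=x86_64

# Register the MAC address of the NIC the node will PXE boot from
# (substitute the UUID printed by node-create).
ironic port-create -n <node-uuid> -a 52:54:00:12:34:56
```

Repeating that by hand for every machine is exactly the pain the playbook-based enrolment described next is meant to remove.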
It'd be far better to have Ansible playbook support to enrol large numbers of nodes. Likewise, hardware is flaky; I think you can probably see there's a bit of a theme here. Things go wrong, and when things go wrong, Ironic has the idea of a rescue mode. We want playbook support to help boot that node into a rescue image, to help diagnose what the problem is, so we can get that node up and running again.

And of course there's a whole bunch of other features that Ironic has that are not essential but are very useful, and a few of them are listed on the screen there: serial console, so rather than physically having to go to a console you can see it over the wire; iPXE; root device hinting; config drive; partition images; all kinds of other things that Ironic enables, but that we haven't got variables for inside the Ansible role yet.

The other thing we want to do is keep up with Ironic. You see, Ironic is a very active project. There's a lot happening upstream, and we want to make sure we incorporate those changes and benefit from them as part of the Ironic role in OpenStack-Ansible. The first one there is Nova multiple compute hosts. At the moment there's a single point of failure between Nova and Ironic, and that interface needs improving to improve the resiliency. There's some great work being done right now in this area, some that happened in the cycle we've just finished and some happening in this new cycle. We want to improve the HA of that interface, and as soon as that happens we want to incorporate it into this work. Obviously that will make things easier for operators.

The next one there is Nova cells v2, and the reason we want Nova cells v2 is to make it easy to mix virtual machines, containers, and bare metal nodes as part of the one OpenStack deployment, again behind that one
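The playbook support doesn't exist yet, but the shape of the bulk-enrolment problem can be sketched in a few lines of shell: given a small CSV inventory (a format assumed purely for illustration), generate the per-node enrolment commands rather than typing them:

```shell
# A tiny sample inventory: name,ipmi_address,mac (illustrative values).
cat > nodes.csv <<'EOF'
baremetal-0,10.0.0.11,52:54:00:aa:bb:cc
baremetal-1,10.0.0.12,52:54:00:aa:bb:cd
EOF

# Emit an enrolment command pair per node instead of typing them by
# hand. (In real use you'd resolve each name to its node UUID before
# port-create, and only pipe the output to sh once you trust it.)
while IFS=, read -r name ipmi mac; do
  echo "ironic node-create -d agent_ipmitool -n $name -i ipmi_address=$ipmi"
  echo "ironic port-create -n $name -a $mac"
done < nodes.csv
```

A real Ansible playbook would do the same thing with a loop over host variables, plus idempotency checks, which is why it is the better long-term home for this logic.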
OpenStack API. Now, we've got a colleague, Kevin Carter, who's actually doing work in this space at the moment to prove that you don't have to do it within Nova cells. But whether we go that way or whether we use Nova cells v2, the reality is that we just want to be able to deploy virtual machine instances and bare metal nodes together as part of the one OpenStack install.

There have been some great improvements in Ironic: in serial console support, in cleaning, in multi-tenant networking like I discussed earlier. We want these things incorporated as they become available in Ironic, and of course that will be much easier to do once we have a virtual gate.

So that really brings us to this slide here: looking back and moving forward. This is the console from the DeLorean in Back to the Future. Hopefully in this talk you've seen the journey that we started, about two guys who needed to bring Ironic and OpenStack-Ansible together to add this Ironic role. You've seen that we've had some problems, but we've now got this thing and it's working. The reality, though, is that there's still lots more to do, as I've shown in the last couple of slides.

What we'd like to see is some operators make use of OpenStack-Ansible Ironic to deploy and give us some real-world feedback. Maybe that's you here today; maybe you can give it a try in a network and let us know how that goes. Or maybe you're a developer and you can say, hey, I'm quite happy to add some of those new features you talked about. So in some ways it's a bit of a call to arms; you know, patches welcome. We would like your involvement to help us make this something that people really want to use. And on that note, tomorrow morning there's actually a fishbowl session for the OpenStack-Ansible team, and it's around new projects that may want to get involved, and you developers who
want to get involved in the project. So if you have any feedback, or if there's anything you'd like to know or anything you want to tell us, please come along. We're pretty friendly, mostly.

There are just some attributions for the photos that you've seen. That brings us to some questions, which we're quite happy to take from you now. And finally, I'll put this slide up; it's just a QR code with a link to the slides and the things we talked about today. So if you want more information, go to that URL or grab the QR code. Otherwise, we'll take any questions.

[Audience question] Do you deploy the TFTP and HTTP servers on the conductors, or in separate containers?

[Answer] The TFTP server runs in the ironic-conductor container, at least in the version we have here for the all-in-one. When you start to scale that out, we can move that around.

Thank you very much, everyone. Thanks very much.