Scott and Jamie, thanks again, and take it away.

Okay, cool. Hi everyone, thank you for your time and for coming today. Just a quick introduction: I'm Scott, I work in engineering research, and my primary focus is OpenStack and, yeah, Ironic. Jamie?

Yeah, hi, Jamie Paul. I'm the manager of a team here called Compute Platform Engineering, and we're responsible for all things Kubernetes and our batch compute farms.

Just a little bit on G-Research: we're a fintech company based in London. We build and run large distributed research platforms for our quantitative researchers, and we're in a migration period from HTCondor on Windows over to Linux and Kubernetes running on top of OpenStack.

So I'll dive a little into our Ironic setup. OpenStack Ironic, what is it? This is the Bare Metal SIG, so I'm not going to teach you to suck eggs, but: Ironic is an integrated OpenStack service which provisions bare metal machines instead of virtual machines. Ironic also supports vendor-specific plugins which implement additional functionality, such as moving machines between different networks. For us, an Ironic machine has four main states: enrolling, cleaning, holding and provisioning. This is purely how Ironic works within G-Research.

Ironic uses IPMI, PXE and a ramdisk image (a microkernel) to turn the machines on and off, and it does that at various stages of the build. Neutron moves the server between different networks using the networking-generic-switch plugin; we use that quite extensively. When a bare metal machine is deleted by the user, it's cleaned and returned to the available pool for reuse. We have a strategy of rebuilding everything roughly once a month; it runs about every 28 days. Servers go into the farm, get used, are drained and handed back, get cleaned, and become available again for someone else to use.

At a really high level, this is the architecture. On the left you've got Kolla Ansible and Jenkins, which orchestrate the deployment of Ironic. That fans out into lots of different conductor groups. We make quite big use of conductor groups within GR, and we tie them to things like availability zones. It gives us a nice, scalable setup: we have up to about 1,000 machines on a single, well, on a pair of conductors. So in a data center with three AZs you'd generally have three sets of conductors, each looking after one AZ's set of machines, and if we need to scale out beyond about 1,000 machines it's quite easy to spin up more conductor groups, tie resource classes to those conductor groups, and keep everything ticking along.

To get machines into Ironic, they have to go through an enrollment phase, which is all orchestrated with Ansible, using custom playbooks within Kayobe. First they go through pre-inspection: we create a record of the machine in the Ironic API with its resource class, apply some baseline firmware settings and, finally, some baseline BMC config. Then we switch the machine on and go through Ironic inspection, which lets us power the machine on and discover what's there.
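As a rough illustration of the enrolment flow described above, here is a minimal sketch using openstacksdk. The cloud name, node name, BMC details and resource class are illustrative assumptions, not GR's actual playbook content:

```python
# Minimal sketch of pre-inspection enrolment with openstacksdk.
# All names and addresses are illustrative; error handling is omitted.
import openstack

conn = openstack.connect(cloud="baremetal-cloud")  # assumed clouds.yaml entry

# Pre-inspection: create a record of the machine in the Ironic API.
node = conn.baremetal.create_node(
    name="gpu-server-001",
    driver="ilo",                    # the talk mentions the iLO driver
    resource_class="gpu-type-x",     # later matched by a Nova flavor
    driver_info={
        "ilo_address": "10.0.0.10",  # BMC address (illustrative)
        "ilo_username": "admin",
        "ilo_password": "secret",
    },
)

# Move the node to MANAGEABLE, then kick off hardware inspection.
conn.baremetal.set_node_provision_state(node, "manage", wait=True)
conn.baremetal.set_node_provision_state(node, "inspect", wait=True)
```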
We check for any cabling issues and identify where the server is plugged in on the switch, and then we can create a port within Ironic to represent that node. This is quite important: we have to use inspection rules here to identify where the server is plugged in, and that tells us there aren't any cabling issues, because if we come back later with a problem and need to troubleshoot it with the networking team, we need to be sure our assumptions about what's plugged in where are correct. They're going to be setting up a tcpdump or whatever on certain switch ports, and we want to make sure that's all correct and we're actually doing things as expected.

Now that we've gone through inspection, we have enough information in Neutron to move the machine. It starts on a holding or provisioning VLAN, where inspection happens, and then Neutron moves it over to cleaning, and we go through a full cleaning cycle. That updates the firmware, verifies iLO settings, and basically makes sure things haven't been tampered with. Then we set the hardware clocks, configure RAID, wipe the hard disks, and we can go on to things like checking GPU health. It's quite extensible; you can do as much as you want, really. That's probably not the full list of everything it does, but it's pretty extensive, and we can do lots of different things in there.

Once it's gone through a full clean cycle, we know the machine matches its inspection rules, is what we expected, and works, because we cleaned it; or at least it works up to a clean. Then we can run the burn-in tests, which burn in the memory, CPU and disks, basically making sure the hardware works under a bit of pressure. We try to catch faults as early as possible, because the further a machine gets down the pipeline, the more expensive it is, in engineering effort, to pull it back out if there's a fault with it, so running the burn-in tests is a good way to do that.

Finally, we create an instance on the server. Well, if we've already run burn-in tests and run programs on it, why create the instance? It's a final test, and it actually moves the server onto a tenant VLAN, or the provider VLANs, which is where the user will really select where they want the machine to be placed. Everything up to now has happened on the provisioning and holding VLANs, so this final step moves it to what a user will actually see, and that helps us iron out any last little issues we might have. Also, with networking-generic-switch, most of what we've done so far happens on just the A side of the switch: it creates a PXE port, and that's all done on the A side, whereas creating an instance gives us a full bond, and that lets us verify everything is happy. The server comes up, we wait for the node's ports to come up, and then we say: yep, you're healthy. Then we just trash the machine; Neutron will go and destroy it, it gets sent for a clean, and it's put into the holding state on the holding VLAN, where it becomes available for end users to consume.
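For reference, triggering a clean cycle like that through the API might look roughly like this with openstacksdk. The explicit step list is a hypothetical stand-in for GR's custom steps (only erase_devices_metadata is an in-tree ironic-python-agent step):

```python
import openstack

conn = openstack.connect(cloud="baremetal-cloud")  # assumed clouds.yaml entry
node = conn.baremetal.get_node("gpu-server-001")

# A manual clean with an explicit list of steps. Automated cleaning (after
# instance deletion) instead runs every clean step whose priority is > 0.
clean_steps = [
    {"interface": "deploy", "step": "update_firmware"},         # hypothetical custom step
    {"interface": "deploy", "step": "erase_devices_metadata"},  # in-tree IPA step
]
conn.baremetal.set_node_provision_state(node, "clean", clean_steps=clean_steps)
```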
So then we move on to the deployment phase; this is a representation of what a user will see. In the top left you've got a human, representing Jamie in this case. They make a pull request into Git, specifying things like flavors, networks, AZs and images; that flows through Nova into Ironic, with the Placement service finding the server and Neutron moving the ports, and out of the system comes a bare metal node.

Step by step: the user requests the bare metal machine via Terraform. The flavor they select maps to a resource class, and the network and availability zone map to a location in the data center where the server is going to be placed. This is important, because the resource class basically represents a type of server: you can say "I want a GPU server of type X", and that's represented in a flavor that Jamie, or whoever the customer is, has access to. We have quite a large estate, so you don't want the machine ending up in some random rack; you can say "I want it in this availability zone, on this network", and that tells the scheduler exactly where it needs to be placed.

The Placement service then picks a node from the available pool, Neutron moves it to the provisioning VLAN, and we go through the machine provisioning phase. Here the Ironic conductor turns the machine on using IPMI, PXE-boots it into the deploy ramdisk, and applies custom BIOS settings using deploy steps: things like turning Hyper-Threading on or off, or any other custom settings the user might have. Then we pull the user's image down from Glance, Neutron moves the machine over to the VLAN that Jamie, the user, selected, and the server is restarted. Once it comes back up, it boots into the OS the user selected. Hooray: everyone has a bare metal machine, and they're all happy.

That's not where the story ends, though. At the end of the cycle, the user deletes the server (in Terraform, you taint or destroy the resource), Neutron moves the server back to the cleaning VLAN, it goes through that full cleaning cycle, and then Neutron moves it to the holding VLAN, where it becomes available again. That means machines coming in from enrollment and machines being handed back look exactly the same by the time they reach the available state.
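As an aside on how that flavor-to-resource-class mapping works: Nova schedules bare metal by zeroing out the standard resources on the flavor and requesting one unit of a custom Placement resource class derived from the Ironic resource class. A small sketch of the documented name transformation and the resulting extra specs:

```python
import re

def placement_resource_class(ironic_resource_class: str) -> str:
    """Map an Ironic resource class (e.g. 'gpu-type-x') to the custom
    Placement resource class Nova schedules on ('CUSTOM_GPU_TYPE_X')."""
    return "CUSTOM_" + re.sub(r"[^A-Za-z0-9]", "_", ironic_resource_class).upper()

def baremetal_flavor_extra_specs(ironic_resource_class: str) -> dict:
    """Extra specs for a bare metal flavor: zero the standard resources so
    scheduling is driven purely by the custom resource class."""
    return {
        "resources:VCPU": "0",
        "resources:MEMORY_MB": "0",
        "resources:DISK_GB": "0",
        "resources:" + placement_resource_class(ironic_resource_class): "1",
    }

print(baremetal_flavor_extra_specs("gpu-type-x"))
# {'resources:VCPU': '0', 'resources:MEMORY_MB': '0',
#  'resources:DISK_GB': '0', 'resources:CUSTOM_GPU_TYPE_X': '1'}
```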
So with that, I'll hand over to Jamie to talk a little about the Kubernetes that runs on top of Ironic.

Thanks, yeah, I'll take it from here. This will be more about how we use these Ironic servers within Kubernetes, and then a little about what we actually use them for. We have a process for cluster bootstrap where we build a minimal Kubernetes cluster. We have a bunch of Terraform configuration in GitHub which instructs OpenStack to create a collection of machines, in this case bare metal Ironic machines, and they're brought online with a very minimal Kubernetes configuration applied via Ignition; that's because we use the Flatcar operating system, and Ignition is its way of bootstrapping itself. If you move to the next slide:

At that point we have a sort of vanilla Kubernetes cluster with a selection of nodes, maybe a relatively small number to start with. We then apply our full Kubernetes configuration via a Jenkins pipeline. At the moment this is quite push-based: our cluster management system, which knows what Kubernetes clusters we have in the organization and what settings they should have, goes ahead and applies effectively a whole bunch of YAML for the various components we put on our clusters to make them look and feel the way our users expect, things such as cert-manager, ingress controllers, all the stuff that makes them, I suppose, GR-ified. I'd also note that we have a migration underway to move away from this push-based, Jenkins-driven model towards a pull-based, constantly reconciling model using Argo CD. Let's move on.

What we typically use all of this for is to build a whole bunch of Kubernetes clusters which we then put our application, Armada, on top of. The reason we're using all this bare metal compute is that, as a company, we do a lot of research, and that research basically runs as a large number of run-to-completion batch jobs, so we want the highest performance we can get, getting the most out of the hardware we have. To do that we provision Kubernetes clusters on bare metal, make all the resources available to people, and enable them to use it through this software called Armada. Armada, which I've given other presentations about (some of you may have seen them), is effectively a system which allows us to schedule workloads across many Kubernetes clusters, think tens, maybe even hundreds. Users submit a job specification saying "I'd like to run this job; it needs this many CPUs, maybe this many GPUs, this much RAM, this much disk, access to certain data". They submit it to the Armada API, and the Armada application is responsible for turning that into a pod, or a collection of pods, which runs on one of those Kubernetes clusters under the covers. Thank you, Jay, who has put a link in the chat with more about Armada.
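To make that concrete: under the covers, a job spec ultimately becomes an ordinary pod with resource requests on one of the clusters. This is not Armada's actual API, just a hypothetical sketch, using the Kubernetes Python client, of the kind of pod such a job spec turns into:

```python
# Illustrative only: the kind of run-to-completion pod an Armada-like
# scheduler might create from a job spec. Image name is an assumption.
from kubernetes import client, config

config.load_kube_config()

# Resource requests taken from the submitted job specification.
job_resources = {"cpu": "16", "memory": "64Gi", "nvidia.com/gpu": "2"}

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(generate_name="batch-job-"),
    spec=client.V1PodSpec(
        restart_policy="Never",  # run-to-completion batch work
        containers=[
            client.V1Container(
                name="job",
                image="registry.example.com/research-job:latest",  # illustrative
                resources=client.V1ResourceRequirements(
                    requests=job_resources, limits=job_resources
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="batch", body=pod)
```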
Just here on what we're now seeing from bare metal: I'd say it's still early days. In the main we've historically mostly used virtualization, but we're now moving to bare metal as a sort of de facto baseline to build on top of. What we have seen so far is generally increased stability, especially for GPU-intensive workloads; we had a bunch of problems which transpired to be related to the virtualization layer. We're definitely seeing better throughput between nodes at the network level and with external resources. I know it's possible to do a lot of tuning within VMs, but it's certainly just a lot easier to eliminate the extra layer and go direct to the resources you need, and it's made BGP peering a lot easier for us as well. We also see better bin packing: with bare metal we tend to have large nodes, which means less fragmentation of our estate and, as a result, simpler state management, with fewer layers between our workloads and our hardware. Next slide, please.

And then limitations, because it's obviously not all good news; there are some things which aren't as good on bare metal. You certainly get slower provisioning times, which isn't unexpected: you're booting a real computer as opposed to spinning up a virtual machine or a container. We now need slightly more precise quota management; we can't be quite as fast and loose as when packing VMs onto things or oversubscribing CPU. We're slightly less flexible in some ways: there are features of virtualization, such as snapshotting VMs, which we don't have and need to implement in other ways. And we've found it can be tricky to mix and match VMs and bare metal, so we've found it easier to start from scratch, build new clusters on the new bare metal architecture, and gradually deprecate the virtual ones.

So the summary here, really, is that GR now primarily uses bare metal for our high-performance workloads through Kubernetes. We still make a lot of use of virtualization where it's appropriate, and where we are using bare metal, OpenStack Ironic is our metal-as-a-service of choice. I think that's it, so if anyone has any questions...

Thanks a lot, Scott and Jamie. Yes, there are plenty of questions.

Yeah, I actually work with y'all, so it's nice to see you; it saves us having an internal meeting about how G-Research uses Ironic. I was curious, thinking about integrations: you say you're still using virtual machines for some things, and it's clear the majority of the machines provisioned via Ironic are for Kubernetes. Have you ever considered a deeper integration between those? For instance, Metal3 exists, but it probably doesn't exactly fit the use case. Or do you provision your hypervisors for the VM side via Ironic? How do those things mesh together?

I guess the focus of this presentation was mostly Kubernetes on Ironic, but actually we use Ironic for a whole bunch of other things within the organization; for example, our big data stack, I believe, makes quite heavy use of it, and databases, as you say, as well. What we may find over time, though, is that, since we've got a general desire to run as many things on Kubernetes as possible and make Kubernetes kind of the substrate for our data center, as more of those other technologies find themselves running on Kubernetes, that may be a more appropriate time to look at a tighter coupling between the bare metal service and Kubernetes itself. For the moment, they're deliberately separate things, I suppose.

And on the note of building hypervisors with Ironic: that's not something we do currently, but it's something we're looking at doing next, on the "by the end of the year" sort of roadmap. It would be really ideal for us. When we first built the OpenStack platform, we used another internal build system that we have, and it makes sense for us to migrate over to Ironic now that it's getting a lot of usage: practice what we preach, I guess.

Any other questions?

Yeah, I was just going to say, the reason I was asking is that with Ironic we've been thinking about other integrations and such, and so
we should chat sometime about what your wishes would be there, because it could be fun input to those discussions.

More questions?

Before I start: I saw early on that, when you do configuration of your systems, you boot into a PXE image. Has there been any thought of migrating that functionality to do as much out-of-band as possible, in lieu of actually booting into a small image to do additional configuration?

Well, we use deploy steps, so that's done out-of-band; it still needs to boot into the ramdisk, though.

The reason I ask is that it looks like one of the things you do with that PXE image is set up BIOS settings, and there are out-of-band interfaces to manage that, like Redfish.

Yeah, sorry: the BIOS side is done by deploy steps, and that's all out-of-band.

Scott, on the recycling: you said you recycle all the nodes every 28 or 30 days, something like that. What's the failure rate, in terms of a node not coming back from that cycle, either failing outright or needing someone to step in and manually kick it?

Yeah, it's an interesting subject, and it's work in progress at the moment; I've actually got two meetings after this about that particular question. It's going to get better over time. There are lots of transient issues at the moment, things where we just need to work out why they fail and make some configuration tweaks: things failing halfway through and Ironic not picking up that they've actually failed, so a node just sits there indefinitely. There are little things like that we need to get out of the way. Ask me that question again in a couple of months and I'll give you a much better answer. But it's being worked on cross-team, between some of our customers and us, because it's definitely something that we want to be able to do.

From our point of view, the plan is to have effectively completely rolling, automated rebuilds. We already have some automation which is able to identify nodes within our Kubernetes estate that have drifted from the desired configuration, because something's moved on in, say, GitHub, or that are just older than a threshold, approaching 28 days. We then have a process to cordon nodes, wait for them to drain (or even forcibly drain them if jobs are preemptible), and then have that system talk to Ironic, rebuild the nodes, and put them back in the farm. So, as Scott says, we need that fundamental operation of rebuilding a node to be as reliable as possible, and then we hang the rest on top of it. The desire is certainly to have, I don't know, five-nines availability and reliability for that service, and that's what we're working towards.

Is the main purpose of the cycle basically to keep all the nodes at the same level?

There are a few purposes. One, I suppose, is to reduce our actual operations: at the moment there's quite a lot of click-ops if you want to rebuild a machine, so a human has to choose to do it. When you're managing a very large estate, you really want to just set the desired state in GitHub, trust that the system will be tending towards it, and have the tests to prove that everything's still working.
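A minimal sketch of the age-based half of that reconciliation, assuming the Kubernetes Python client. Draining and the call back into OpenStack are left as clearly hypothetical helpers; this is not GR's actual tooling:

```python
# Cordon Kubernetes nodes approaching the 28-day rebuild threshold.
from datetime import datetime, timedelta, timezone
from kubernetes import client, config

MAX_AGE = timedelta(days=28)

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    age = datetime.now(timezone.utc) - node.metadata.creation_timestamp
    if age > MAX_AGE and not node.spec.unschedulable:
        # Cordon: mark the node unschedulable so no new pods land on it.
        v1.patch_node(node.metadata.name, {"spec": {"unschedulable": True}})
        # drain_node(node)          # hypothetical: evict or wait out the pods
        # rebuild_via_ironic(node)  # hypothetical: delete and re-provision
```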
Also, it's there to ensure we're eliminating any kind of gremlins. A lot of stuff in computer science boils down to "turn it off and on again", and if you make sure that's automated from the ground up, you eliminate a lot of those problems. Possibly (and this is something we'll find out) I personally think we might end up masking some of those problems, because we're doing that reset so effectively, and actually sometimes running something for a long time is what really draws out those kinds of bugs. But if it's not a problem, and physically rebuilding everything every 28 days automatically makes it go away, then who cares, right? That's the driver, anyway.

Sorry, Jay.

I was just going to say, I think you hit it on the nose about avoiding a lot of problems. With a lot of hardware, in Ironic there was a history of, say, the BMC sometimes just falling offline if it didn't receive a command for 60 or 90 days, and things like that. By rebuilding and touching those machines every 28 days, you're probably avoiding a world of hurt. It's sort of interesting to see how G-Research's requirements naturally enable the kind of workflow which helps things work more smoothly with Ironic.

You also get much better assurance of your configuration, because you know it's being flexed all the time; you're building things constantly, rather than finding something was checked into master and realizing four months later that it's broken.

Right. I was just thinking that, in order to keep the configuration the same everywhere, sending a node through a full clean and rebuild sounds like quite a heavy operation, which is why I was asking. We don't do this at the moment, not even for our batch workers, but we probably could. Of course, we have the issue that some machines, for instance hypervisors provisioned through OpenStack and Ironic, have been with us for a couple of years and have basically never been reinstalled. The configuration is kept up to date, but if they haven't been rebooted they're still running an older kernel, and they may accumulate some cruft. So it's quite interesting.

Dimitri, you have a question?

Yeah: are you doing, or interested in doing, firmware upgrades as part of your ready-state preparation?

We do them through a cleaning step. It's pretty slow. But if we did it at provisioning time it would slow provisioning down quite a lot, whereas doing it on the way out makes a bit more sense to me, because then the user just gets handed a machine, rather than waiting around for a firmware update.

Are you doing it in-band, or through Redfish or anything like that?

We do it in-band: we pull down the RPMs and then we just run those.

Scott, there's no Redfish at G-Research? You're not using Redfish at all yet, right?

We use the iLO driver, which basically uses Redfish to check power state, turn the machine on and off, that kind of thing.

Another question I had is about your clean steps, when you showed that nice list of clean steps.
I was wondering which of those are downstream clean steps versus upstream, because the GPU check, for instance, is not upstream (I know this), but all the others, I think, could be. So I was wondering which of these are actually homemade.

The ones I listed... let me just double-check. Yeah: updating the firmware, that's us. Verifying iLO settings: I'm not too sure about that one. The hardware clocks, that's us again, I think. Then configuring the RAID and wiping the hard disks: we've forked how that works, because we use an appliance to encrypt the hard drives. I think we briefly spoke about that before; it extends things using our own hardware manager, because it's a little bit special and a little bit specific to how GR does things. But there's not a great deal there that couldn't be upstreamed. And the check-GPU-health step isn't actually running live yet; that's in our dev environment. What it does is run a binary that Jamie's guys came up with, which checks the health of a GPU when a machine starts up. They basically threw it over to us and said: can you run this as a clean step, rather than every time they boot a machine and test it then, when it's kind of too late? As I said in the presentation, the further down the line you get, the more expensive an operation it is to actually get the node back out, so running it as a clean step helps. And yeah, we could look at upstreaming something like that.

That would be very interesting, because we're ramping up our usage of GPUs as well, and we were just looking into health checks and burn-in, but we haven't done anything yet.

Yeah, burn-in is definitely an area where I'd like to extend and include the GPUs, because it's a bit of a blind spot at the moment: you burn in the CPUs and the disks and everything, then you start an instance up and check there's nothing untoward, but nobody actually touches the GPU until either that clean cycle runs with the check enabled or a user actually gets the machine. So there's definitely room for improvement there, but at least the foundations are in place with the burn-in for the CPU and everything, and it's quite easy to extend. But I guess that's where Jay comes in, really; Jay's on our open source team.

Yeah, keep me busy; I'm looking forward to that. I'm sitting over here just celebrating, because it sounds like you have a really good setup, using a lot of the surface area of Ironic in the right ways. It's just exciting to hear that y'all pulled it off the shelf and started using it this way, because in terms of features this is one of the most fully featured Ironic buildouts I've seen, particularly for hardware management, and that's the part I cared the most about when I was developing on Ironic full time. So that makes me really happy to hear. Maybe you hear this at the Bare Metal SIG all the time and I'm just now getting in on the love, but I appreciate hearing all of it.
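For readers curious what a custom hardware-manager clean step looks like, here is a hypothetical sketch of exposing a GPU health check through ironic-python-agent, in the spirit of the step discussed above. The step name, priority and health-check binary are illustrative, not GR's actual code:

```python
# Hypothetical custom ironic-python-agent hardware manager.
import subprocess

from ironic_python_agent import errors, hardware

class GPUHealthHardwareManager(hardware.HardwareManager):
    HARDWARE_MANAGER_NAME = "GPUHealthHardwareManager"
    HARDWARE_MANAGER_VERSION = "1.0"

    def evaluate_hardware_support(self):
        # Run alongside the generic manager rather than replacing it.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        return [{
            "step": "check_gpu_health",
            "priority": 50,          # > 0, so it runs in automated cleaning
            "interface": "deploy",
            "reboot_requested": False,
            "abortable": True,
        }]

    def check_gpu_health(self, node, ports):
        # Illustrative: shell out to a health-check binary and fail cleaning
        # if the GPU is unhealthy, so the node never reaches a user.
        result = subprocess.run(["/usr/local/bin/gpu-healthcheck"], check=False)
        if result.returncode != 0:
            raise errors.CleaningError("GPU health check failed")
```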
I've heard of some interesting feature usage in Berlin that amazed some of us; we had no idea. This is, you know, testament to how flexible and powerful it is. But I want to take a quick step backwards; I unfortunately joined late because of a work meeting, but I heard the magical words "disk encryption", and that it's an appliance and part of your hardware manager. It would be interesting to collaborate upstream on adding some sort of hardware unlock capability. I know there are other organizations and, I guess, partner efforts that want to see disk encryption as a fully fledged feature, but there are a number of things they want to achieve, and they also want to do attestation, which means an entire workflow interaction in the end; I don't know if that's necessarily a thing for Ironic. So it's one of those things that might be interesting to at least discuss upstream and see if maybe there's a point of commonality, I guess, is what I'm hoping for.

Yeah, sure. We push quite a lot of stuff back upstream, especially with Kolla Ansible, so it's certainly something we can work on together.

In any case, they also want to support this for Windows as well, which is surprising, but we'll see where it goes.

Sorry, I have one more question, with respect to flavors, because you said flavors basically encode the availability zone as well; well, "encode", they carry the information about the availability zone. Does that mean that for every hardware type you have, you have, say, three flavors? How many flavors do you have, basically, is the question.

The flavor has the resource class, and then the availability zone and the network are what specify where in the data center the machine will end up.

Right, but isn't that in the resource class as well?

No, it's not. When, say in Horizon, you drop down and click the flavor, all that really carries is a resource class; availability zone is another drop-down. And the networks and provider networks are tied to availability zones, so you have to be a bit smart about how you do that. AZ1 here is analogous to what you'd see in AWS, really: the flavor encodes the qualities of the machine itself, like CPU, RAM and that kind of stuff, but where it is and what network it uses are just separate choices.

Do you have requirements for more fine-grained scheduling than availability zones, as in "I have to have this specific instance in this specific rack"? I ask because this is a requirement I was just faced with from our folks: they want to make sure specific servers end up on specific racks, rather than creating instances and then reshuffling things according to where they landed. Is that a requirement you've ever faced?

It is, but we achieve it through Kubernetes, more so. My team is responsible for making sure we have clusters in all the places we need clusters, and then we label and taint nodes properly, so we can show people what's where, and workloads can choose, if they care, where to run.
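Tying the flavor / AZ / network selection together: GR drive this through Terraform, but an equivalent request with openstacksdk's cloud layer might look roughly like the following (all names are illustrative assumptions):

```python
# Rough openstacksdk equivalent of the Terraform request described earlier.
# Flavor, image, network and AZ names are illustrative assumptions.
import openstack

conn = openstack.connect(cloud="baremetal-cloud")

server = conn.create_server(
    "research-node-01",
    flavor="gpu-type-x",      # flavor carrying the bare metal resource class
    image="flatcar-stable",   # the user's chosen image
    network="tenant-az1",     # tenant/provider network tied to the AZ
    availability_zone="az1",  # where in the data center the node lands
    wait=True,                # block until the node is ACTIVE
)
print(server.status)
```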
And something we haven't done yet, but are starting to work on, is rack affinity: we've got certain very high performance workloads that want some kind of affinity to the same switch, where we want to schedule a bunch of GPU jobs that we know need to talk to each other very fast all into the same rack. Then you've got challenges around preemption and how to actually get there without being really inefficient, either waiting for a rack to be free or keeping large amounts of the farm sitting idle so you can always achieve it on demand. So it's complicated, but hopefully that answers the question.

Cool, yep, thanks. Are there any more questions?

Some of the folks at the US national labs have struggled with rack affinity as well for some other HPC workloads, but generally those users seem to not really use Nova; they use more declarative inventories, where they have an absolute configuration they push out.

We'd like to not be in a position where we have to make people aware of specific racks. We want people with that requirement to be able to schedule their workload on servers which are effectively local to each other, without having to understand the topology of the data center.

And that's a huge value-add for just using the software, because I've had that same challenge in cloud environments for years: "I scheduled it, it's deployed... oh wow, it's at the other end of the data center, and this is slow." Exactly.

Do we have more questions? I'm kind of curious to ask: has anything been missing? It seems like you've covered a lot of the hardware management surface area that Ironic provides, but has there been something else, not hardware-specific (you already talked about your custom hardware managers), where you asked "does Ironic do this?" and were disappointed that it didn't; some feature we don't have that you wish we did?

Yeah, that's a good question. I don't know, really; nothing springs to mind. I don't know about you, Jamie; you're more of an end user.

I guess I have a couple, nothing major. From my point of view, it sometimes seems to take longer than I'd expect to put a new flavor in; I don't know if that's a side effect of our particular automation. And it sometimes feels like Ironic is quite opinionated about what kind of hardware can be run in it. For us that becomes less of a problem as more of our hardware becomes standard and homogeneous: in particular, because we're going through this migration from Windows to Linux, and expanding our estate at the same time, the new kit we buy all tends to be the same sort of shape and size, so it fits in nicely. But we also have a large legacy estate of all sorts of shapes and sizes, and working out how to fit that in nicely isn't always trivial. The other, very specific, thing, which again I don't think is an Ironic problem, is that on virtual machines on Kubernetes we make quite heavy use of Cinder for PVCs, for block storage. I don't think that's a feature that's available within Ironic, so we need to find a slightly different way of achieving that.
It kind of sidesteps the issue for the moment that most of the workloads that run on bare metal Kubernetes don't need PVCs at all (they tend to use remote storage), so it isn't a problem yet, but over time, as we move everything to Ironic and Kubernetes, we're going to need to solve it. Those are the only couple of things that spring to mind.

Did you say Ceph, or just Cinder in general?

For us, Cinder is the interface, I suppose, and Ceph is the actual data storage system behind the scenes.

Okay. The reason I asked is because, supposedly (I believe), if you ask Cinder to attach a volume, you get iSCSI connection information back; however, I believe the Ceph community is moving away from having an iSCSI gateway, so it might be a short-lived capability.

We have other options, things we've explored; we're even able to run storage on the Kubernetes nodes themselves, but we're trying to avoid doing that for as long as possible. We'll see. It's not a major problem at the moment, just something we need to solve.

Okay, any more questions or comments? If that's not the case: thanks a lot, again, Scott and Jamie, for being with us today.