Hello and good morning, everyone. Well, Feilong and I are from New Zealand, and in New Zealand right now it's 11:50 p.m. I just arrived a couple of days ago, so if I seem a bit jet-lagged, you will certainly understand what I'm going through. Look, it's excellent to be here with you all, and very nice to see a bunch of familiar faces in the audience and to meet you all again. This is my friend Feilong, my name is Bruno, and we work for a company called Catalyst Cloud. We are a public cloud provider down in New Zealand, in a little corner of the world map. I just learned that sometimes people leave New Zealand off the world map, so there is a bit of a movement going on in New Zealand asking people to reintroduce New Zealand to the world map. So please check your books, and if we're missing there, please add us again. We run three public cloud regions in New Zealand: one in Hamilton, one in Porirua, and one in Wellington. They are three separate regions, completely isolated from each other, each with one availability zone and different means for high availability. The topic of our presentation today is a bit of a journey that we've been on for about a year, maybe a year and a half. It is about us introducing New Zealand's first certified Kubernetes platform service running on top of OpenStack. When we talk about the certification, we're talking about the certification done by the Cloud Native Computing Foundation: making sure that the Kubernetes offering on top of OpenStack passes all the API tests, to certify that Kubernetes works as expected, but also to guarantee application portability. One of the biggest benefits that our customers in New Zealand are looking for when running Kubernetes is really that guaranteed application portability between public cloud providers.
A lot of our customers are mostly targeting their customer base in New Zealand, but some of them are also exporting their services to other countries. They have a presence in Australia, they have a presence in Europe. What we really want to do for them is to allow them to use any public cloud provider, and as long as that public cloud provider has a CNCF-certified Kubernetes offering, they have true application portability between cloud providers. To me that's one of the biggest benefits of Kubernetes: that real dream of true application portability that I think, as cloud providers, many of us have missed at the infrastructure-as-a-service layer, with APIs that are not always compatible. Maybe for us in the room that's fine, because we all run OpenStack, I hope, and we have a degree of portability. But when I'm talking to customers in New Zealand who also have a presence in Australia, maybe with Google or Azure, or something in Europe, they really need to make sure that, without changing anything at all, their application will just run. So the journey was about making Kubernetes on top of OpenStack production ready. In pretty much everything that we do at Catalyst we only use open source software. All the development we do is upstream; we never retain any of that code for ourselves. There is no secret sauce, there is nothing that we keep for us. And of course we also acknowledge the fact that we have gained a lot from the OpenStack and Kubernetes communities. In terms of making Kubernetes production ready, there were four elements that we took into consideration. One of them was strong data security; I'll elaborate on each of those later.
The others were high availability and resiliency, good performance and scalability, and finally ease of use. So when we talk about strong data security: one thing we wanted to make sure of is that if customers are deploying Kubernetes (in our case we built a solution using Magnum, so Kubernetes is being orchestrated by Magnum), Kubernetes could use Keystone as its identity and access management provider, so that we had role-based access control to Kubernetes using the same usernames and passwords that people have already created in our public cloud. It doesn't make much sense if we give them the ability to add more users and define roles and permissions in our OpenStack infrastructure, but nothing is inherited into Kubernetes itself, and on the Kubernetes end we only give them an admin username and password and off they go, creating their own users in there, and so on. So the intention was that they should be able to use their Keystone username and password. Not only that, they should have different roles in Keystone: maybe an admin role, a developer role where you can create containers, and a read-only role. That was part of the work that was done by Feilong and a few other people in the community, and that integration exists now and it works. There was a post by a friend of ours, Lingxian, on the OpenStack Superuser blog where he explains that integration step by step, with a bit of a demo showing how it works. I highly recommend reading that blog post, by the way. The next one: in New Zealand we also help customers run private clouds. We actually don't sell OpenStack as a product.
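Coming back to the Keystone integration for a moment, here is a rough sketch of what enabling and using it can look like from the user's side. It assumes the Magnum label documented upstream (`keystone_auth_enabled`) and the `client-keystone-auth` helper from the cloud-provider-openstack project; the image and network names are placeholders, so treat this as illustrative rather than exact:

```shell
# Illustrative only: create a cluster template with the Magnum label
# that turns on Keystone authentication (image and network names are
# placeholders for this sketch).
openstack coe cluster template create k8s-keystone-template \
    --coe kubernetes \
    --image fedora-atomic-latest \
    --external-network public \
    --labels keystone_auth_enabled=true

# On the client side, kubectl can then obtain Keystone tokens through
# the client-keystone-auth exec plugin shipped with the
# cloud-provider-openstack project, reusing the same OS_* environment
# variables you already use for the OpenStack CLI.
source openrc
kubectl get pods
```

The point of the design is that the Kubernetes API server validates the same token a regular OpenStack client would present, so roles defined in Keystone carry straight through to Kubernetes RBAC.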
We allow them to run OpenStack from upstream, but we manage that private cloud infrastructure for them as if it were one of our regions. We use the same software, the same team of people, the same experience that we developed over the last five years running a public cloud based on OpenStack, and we apply that to their private cloud infrastructure. From a lot of our private cloud customers, one of the biggest requests was network policies inside Kubernetes. For those of you who are not familiar with network policies, they are akin to security groups in OpenStack: you are pretty much saying "this pod can talk to that pod", but using the native Kubernetes constructs. To implement network policies you need a network back end that supports them, and work that Feilong has done recently is to introduce support for Calico in Magnum. So when you deploy Kubernetes with Magnum now (in our case it's the default choice on the Catalyst Cloud) you can use the Calico network back end, and with Calico you have support for network policies. I'll skip ahead a little bit and touch on performance later, but another reason for using Calico is that we ran some tests with the Flannel network overlay that Magnum was using. Just to give you an idea in terms of performance: in a test from one hypervisor to another hypervisor we were getting 6.5 gigabits per second of network throughput. When we ran the same test from one pod to another pod, that performance dropped to 400 megabits per second. We were losing a lot of our network performance with the standard setup that came out of Magnum. So part of the work Feilong did with the implementation of Calico was to make sure that the network performance was as close as possible to the Neutron networks.
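The "this pod can talk to that pod" idea above can be made concrete with a standard Kubernetes NetworkPolicy object. This is a minimal sketch (the `app` labels are invented for the example); Calico enforces it, while a back end without policy support, such as plain Flannel, would silently ignore it:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  # Select the pods this policy protects.
  podSelector:
    matchLabels:
      app: backend
  ingress:
  # Only allow traffic coming from pods labelled app=frontend;
  # everything else is denied once the pods are selected by a policy.
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF
```

This is the Kubernetes-native analogue of an OpenStack security group rule scoped by instance, except it is expressed in terms of pod labels.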
After doing that work, I believe we got to something like 6.3 gigabits per second, which was very close to the maximum performance we were getting from the hypervisors themselves, and at that point we said: this is good, this will scale. So those were the two reasons for choosing Calico as the network back end. And finally, a feature that is really important if we are managing hundreds of Kubernetes clusters on behalf of customers is rolling upgrades and patching. We need to make sure that we can keep patching that Kubernetes infrastructure, and that we can do major upgrades as well. This is a feature that is still being actively developed upstream; Spyros from CERN is working on that, and I think the patch is currently in good shape and could probably be merged in this release. In our case, we are currently offering the Kubernetes service in tech preview, so we made it clear to our customers that currently, if you deploy Kubernetes, you'll be stuck on that version: we don't yet have the ability to upgrade, or easily upgrade, to the next major release. It's one of the features we are waiting for before we can call the service beta and start engaging a bit more with customers on more serious workloads. Now the next one is high availability and resiliency.
When we looked at Magnum originally, the concept of highly available master nodes and highly available worker nodes was not present yet. Of course, if we want customers to run something serious on top of this Kubernetes offering, we need to make sure that a single hypervisor going down doesn't take multiple master nodes or multiple worker nodes down at the same time. In the case of the master nodes, that's because you have an etcd cluster and a bunch of things in there that rely on having a certain number of nodes present to be available. In the case of the worker nodes, if customers deploy something where they say "I would like a replica set of X", they don't expect a single virtual machine affected by an incident to impact their application. So Feilong did some work on that as well, and now there is support in Magnum for highly available master nodes. When you do that, it creates a load balancer in front of those master nodes; that is another feature introduced there, with support for Octavia, OpenStack's native load balancer. For the worker nodes, what we've done at the moment is to use server groups with anti-affinity. The master nodes are in an anti-affinity server group, and the worker nodes are in a separate anti-affinity server group. There is also some work being discussed with Spyros from CERN in terms of adding support for availability zones, so that you can use different availability zones to isolate the master nodes.
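As a sketch of what this looks like in practice (cluster and template names are invented; the flags follow the Magnum CLI, but check your release):

```shell
# Ask Magnum for an HA control plane: with more than one master node,
# Magnum places an Octavia load balancer in front of the masters.
openstack coe cluster create prod-cluster \
    --cluster-template k8s-prod-template \
    --master-count 3 \
    --node-count 3

# The placement guarantee described above is the plain Nova
# anti-affinity server group mechanism, i.e. the same idea as:
openstack server group create --policy anti-affinity k8s-masters
```

Magnum creates the server groups itself when building the cluster; the second command is only there to show which Nova primitive is doing the work.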
And finally, a feature that we are currently working on is auto-healing. You probably know that Magnum uses Heat, and Heat orchestration templates, to deploy and manage Kubernetes. Heat has a feature to do health checks on your master nodes and your worker nodes, and the intent is that we'll have those health checks running all the time, regularly, and if we detect that a given master node or worker node is not healthy, we rebuild that node as fast as possible, just to keep the service always available, always in shape. Now, in terms of performance and scalability, I have already touched on network performance, but the other one was storage performance, and to our benefit there wasn't much we had to do in this space. Magnum already supported the integration between Kubernetes and Cinder, so if a customer asks for a persistent volume, is making a persistent volume claim, there was already support for Cinder to create that volume and for that volume to be attached to the pod that needs it. Based on our tests, the performance you get there is, as expected, pretty much the raw performance you would get out of the volume if you mounted that same volume on a virtual machine. So that was pretty good; I don't think we made any changes to that. A thing we're working on right now is the time it takes to deploy Kubernetes clusters. We went live really early with the Kubernetes service on the Catalyst Cloud in New Zealand. As far as I can tell, we're definitely the first public cloud provider there to offer a Kubernetes platform; I know that even in Australia some of the global public cloud providers don't have Kubernetes available yet on their clouds.
So we went live really early, and that's because we wanted to encourage a quite close feedback loop with our New Zealand customers, so we could understand what they need and how we can shape the service to fit their unique needs in New Zealand. I guess the trade-off is that, in the process of doing so, we produced our container images upstream, and they are currently on Docker Hub, using the OpenStack upstream infrastructure. What that means is that every time a customer deploys a Kubernetes cluster in New Zealand, currently they are going all the way to wherever Docker Hub is hosting those images and back to New Zealand, and that takes a long time. It's one of the issues of being so far away from other places: network latency is a big deal in our country. The result is that currently, if you deploy what we call the development Kubernetes template, which has one master node and one worker node, that takes about five minutes. But if you deploy the production template, which has at least three worker nodes and at least three master nodes, that's currently taking 15 minutes. Honestly, that feels like ages nowadays; it is several coffees for some people. So we definitely want to reduce that to no more than five minutes, ideally three minutes. We're being realistic here.
We know that we are deploying and building all those master nodes when customers ask us to provision Kubernetes, but part of making that process faster is making sure that we have a local Docker registry, plus a few optimizations we want to do that would make the whole bootstrapping process faster. Finally, when it comes to horizontal scalability, there is a feature that we haven't worked on yet, and I'm really interested to know if anyone here is actively working on it: the ability for Kubernetes to add additional worker nodes based on its own knowledge of the capacity of the cluster. Kubernetes is aware of how many pods we have deployed and what resources are currently utilized, and it could tell us "hey, I cannot schedule pods any more because I ran out of compute resources". So the intention is to say: when I get to 80%, add another worker node on my behalf. And customers can put a limit on how many additional worker nodes could be added. So this is a feature that is still being developed. To be clear, currently in Magnum you can scale up or down manually, but Bruno is mostly talking about auto-scaling, as opposed to someone going there manually. It's absolutely possible for people to go in there, do a single API call, and say "go from three worker nodes to 10 worker nodes", or back to three; that's easy. What I'm talking about is proper auto-scaling, where Kubernetes itself adds more compute capacity as needed. And finally, the last one is ease of use. The patterns we are encouraging people to use in New Zealand are definitely to drive it via the APIs: use your preferred infrastructure DevOps tools to drive your cloud infrastructure. Our customers over there really enjoy using Terraform.
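Going back to manual scaling for a moment: the "single API call" mentioned above is, in Magnum releases of this era, roughly the following (cluster name invented):

```shell
# Scale the worker pool from 3 to 10 nodes; Magnum updates the
# underlying Heat stack to add the extra worker VMs.
openstack coe cluster update my-cluster replace node_count=10

# ...and back down again when the load drops.
openstack coe cluster update my-cluster replace node_count=3
```

The auto-scaling being discussed is this same operation, but triggered by the cluster itself based on pending pods rather than by a human.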
They really enjoy using Ansible; those are the two primary tools that most of our customers use in production. So part of the work we've been doing is to introduce support for the Magnum APIs to those tools, in this case Terraform and Ansible. Feilong developed most of the work for the Ansible module, and most of the work for Terraform was done by another community member, which is wonderful; it's all about that international collaboration that we want to see happening on this stuff. And finally, the last thing I want to touch on under ease of use is the close integration with the OpenStack infrastructure layer. What we have learned while running Kubernetes deployed by ourselves, not by Magnum, on the Catalyst Cloud for the last two or three years, is that what makes Kubernetes really awesome in a public cloud environment is that it can actually orchestrate actions on the infrastructure layer on your behalf. Let me give you one example. If you go to Kubernetes and you say "I would like an ingress controller for my application", you expect that Kubernetes will create a load balancer in OpenStack for you. Not only that, you expect that Kubernetes will create the layer 7 routing rules in that load balancer on your behalf for that application. So part of the work we've been doing, and that was also work done by Lingxian (I'm not sure if Lingxian is in the audience here), is to work on the cloud provider for OpenStack with the Kubernetes community, so that there is better and broader support for all the OpenStack services.
So just a quick summary of how far that work has gone. The load-balancing integration with Octavia is pretty solid right now. As I said, block storage was already solid when we touched it. Object storage, in case customers want to use object storage as the container registry for their images, is pretty good as well. Virtual machines for additional worker nodes: all done. The network integration with Neutron is solid, and finally there is the integration with Keystone for access control. There will probably be more touch points later, but what we wanted was that really smooth integration between Kubernetes and OpenStack. Now, would you like to cover next steps? Yes. As Bruno mentioned, we are using OpenStack Magnum. We have been talking about the production-ready journey, but as you can see it's still not fully there; there is still some work to be done. For upstream, what we need to do next is health checks and auto-healing. Bruno mentioned one solution that uses the Heat health check; another solution we are talking about is using the node-problem-detector plus Draino and the autoscaler to fix the Kubernetes cluster automatically. Then there are rolling upgrades, and resource clean-up on deletion, and that one is a little bit tricky. We have seen customers complain in open tickets: "I can't delete my Kubernetes cluster." That's because the user has created a LoadBalancer service on top of the Kubernetes cluster, but Magnum has no idea about it, because the load balancer is created in the same network and Magnum doesn't know which load balancer it can delete. We currently have a solution for that, but it probably needs a cherry-pick in Kubernetes from the current master branch into, I think, v1.11 and v1.12, and we have tested it working. So it could probably happen in the next couple of weeks or months, I don't know.
Yeah, just to be clear on that one: we already have some code that is doing that clean-up. It will probably evolve; it's just a starting point. But at least now Kubernetes can tag whatever resources it created, saying "this belongs to this specific Magnum cluster", so that later on we can clean up those resources before deleting the cluster. The next one is ingress controller integration with Designate. As Bruno mentioned, we have done some work on the Octavia ingress controller, and probably the next big item is integration with Designate. That's for upstream. For downstream, for Catalyst specifically, we probably need a dedicated container registry; currently we're still using Docker Hub, and for performance reasons we need a dedicated one just for the Catalyst Cloud. Another one is a dedicated discovery service, because in Magnum the default discovery service is discovery.etcd.io, and that one is not designed for production use, just for demos, and it's not really actively maintained. So we probably need to at least deploy a local version of it (there is a container image for that) dedicated to the Catalyst Cloud, and we will probably propose a patch in Magnum so that Magnum can support a list of discovery services: by default you use the public one, and if the public one is down you can just use another one in the list. We also need an automated pipeline to quickly create, test, and release the whole thing; that needs to be done as soon as possible. It's a bit tricky, because recently we ran into a problem in Kubernetes v1.11: with that release you can lose the internal and external IP addresses of your worker nodes. It's a bug in Kubernetes, and the fix has been cherry-picked; it's fixed in v1.12, but it hasn't really been merged into v1.11 yet. And this one is quite important, right?
It's not that we are planning to have our own internal pipeline where we have stuff that hasn't been developed upstream; it's just about the velocity that we need if there is a critical bug or a critical security issue. What we have found, in trying to work with the Kubernetes community upstream, is that something that was really critical took too long to be merged and to be available for us to use. So we will definitely need that ability, just to make sure that if there is something critical we get it sorted as quickly as possible, while we work upstream to get those images sorted upstream. Yeah, we would like to use the upstream public images as much as possible, but in some cases you think the bug is critical and the reviewer thinks it is not. Yeah, we are very aware of the risk of producing or running any code that we haven't developed and merged upstream first. We very rarely do that with OpenStack itself. Sometimes it's needed, and what we've learned is that the same thing is needed for Kubernetes. But we are not going to use that bullet very often, hopefully. So there are some tips, lessons learned from the technical preview we are running. Currently there are some limitations. Don't use overlay or overlay2 as the Docker storage driver together with a Docker volume size; with that combination you will probably run into a problem where you can't create any containers. If you want to use overlay or overlay2, just leave the Docker volume size empty. Another limitation, as we mentioned, is that Magnum is not aware of the resources created by Kubernetes, but I think that should be fixed in the next couple of weeks. And there is a bug.
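To make that storage-driver tip concrete, here is a sketch of a cluster template that follows it (image and network names are placeholders):

```shell
# Safe combination per the advice above: overlay2 with NO dedicated
# Docker volume, so container storage lives on the instance disk.
openstack coe cluster template create k8s-overlay2 \
    --coe kubernetes \
    --image fedora-atomic-latest \
    --external-network public \
    --docker-storage-driver overlay2

# The combination to avoid at the time of the talk was adding
# --docker-volume-size together with overlay or overlay2.
```

In other words, only pass `--docker-volume-size` when you are using a storage driver that expects a dedicated block device.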
Yeah, the one I just mentioned; you can take a look at cloud-provider-openstack issue 280 for more details. As for versions, I would suggest using Magnum Rocky, but at least use Magnum Queens, and for Heat I would suggest stable/queens as well, because there is a multi-region bug in Heat; if you don't use that version, it doesn't work if you are running multi-region. So that was a pretty interesting one for us to learn, because as a public cloud provider, like many of you, we have a pre-production environment. In our case, one of the current limitations of our pre-production environment is that we cannot yet fully emulate the actual behavior that we have with three production regions in New Zealand. The interesting thing is that up to pre-production, whatever work we had done with Magnum and Kubernetes was working really well, and as soon as we rolled out to production this bug came up: Magnum doesn't work with multiple regions, and that's because of a bug in Heat. That was a very interesting surprise, because, to be honest, we thought there were people already running Magnum and Kubernetes in multiple regions before, so we were surprised that we were the first ones to stumble on the bug and have to fix it. But it's sorted now; just use the latest version. So that's all. Any questions? (If you could use the microphone for the recording, that would be good.) So, you mentioned the integration between Kubernetes and Keystone.
Yes, with the solution from the blog post you mentioned, by Fang, I think. We went a little bit further, because instead of using passwords we introduced application credentials, which is a newer Keystone feature. We produced an upgrade to the Gophercloud library so you can use kubectl with application credentials. So you can create your own application credential and use it to deploy applications on Kubernetes without going through passwords, because we don't use passwords; we use federation. Understood. So the important thing is that the work we've done is pretty clean in terms of the interfaces used. We are just using the Keystone API, and if you have a valid token, be it from an application credential or from a regular user with a username and password, that valid token will be accepted, so long as it has one of the roles that give you access to Magnum. In our case there are three different roles, as I said: admin, developer, and read-only. There is no reason, as far as I can tell, why application credentials wouldn't work. Actually, that's also something we would like to do: we very much want to move on to the next version of Keystone to use application credentials. Thank you. In our case, just to let you know, there is a presentation being given by a friend of mine called Adrian on a new OpenStack project that recently became an official OpenStack project, called Adjutant. Adjutant is something that, for us as a public cloud provider, helps us streamline business process workflows.
Broadly, what I mean by business process workflows is things like a customer signing up, a customer terminating their account, a customer inviting someone else to join their project in OpenStack, and the process of sending the email with a link to validate their email address that expires automatically, and so on. Adjutant automates these workflows. The interesting thing in our case is that via Adjutant our customers can already create, let's say, "fake" application credentials. It's not a clean feature like proper application credentials, but they definitely have the ability to create additional accounts that they can use for applications only, and the ability to say "this application account has only this role". So we are not as impacted as you probably are, because we are running Adjutant, but we certainly want to use application credentials as well. Have you considered using Kuryr-Kubernetes, which is a CNI plugin that uses Neutron to network pods? I'm not sure if I'm familiar with it. Okay, so it's an OpenStack project; are you familiar with it? Kuryr? Yes. Oh, okay, I just didn't understand the name. Yes, we have actually, very much so. We got in touch with the Kuryr community about a year ago, right when we were starting this journey. And I don't know if you remember, but a big feature for us, or for our customers, was network policies.
Network policies were already fully supported by Calico when we started doing this work, not to mention that in the Calico community what we found was about 50 developers who were really active working on Calico, developers from multiple companies: the sort of healthy open source collaboration that we want to see, lots of developers from multiple companies truly treating it as an independent project. We didn't find that same level of traction or agility in Kuryr. I understand that there are some things we could do even better with Kuryr, because the integration would be even closer to OpenStack itself, but a year ago it was not ready. One of the things I want to do at the conference this time is to check how Kuryr is doing, go to the update session there; we have it today, at 2:20 I think. The intention is to understand whether it has the features that we need, and at that point it's pretty much a matter of looking at the templates and introducing it, because one of the nice things with Magnum is that you have flags you can pass saying "I would like to use this network back end". Currently we pass that with Calico, but there is nothing preventing us from saying "use Kuryr" and introducing support for Kuryr in parallel. And then, if that becomes better than the Calico back end, we can switch to Kuryr later. Thank you. It has been considered, and I would love to, you know. Hi. Did you encounter any troubles implementing high availability using Octavia? With Octavia specifically? Look, I guess the first part of the question, have we found some troubles: we found all the troubles you can imagine. Lots of troubles. But it's part of the journey, and it's something that we really like at Catalyst.
We are trialling, you know, bleeding-edge software and doing things that people haven't done before. If you're not finding troubles, and if you're not solving them as fast as you can, then you're probably not doing something that is worth your time. Now, specific issues with Octavia for high availability: yeah, there was one bug that is probably worth mentioning; it has been fixed already. There was a bug where Octavia lost its connection to the MySQL back end, and when it lost that connection it said "hey, I don't know the state of these amphorae, so I'll just recreate them or delete them", and in the process of recreating them both amphorae became completely unavailable, and at that point the load balancer just stopped working for the customer impacted. In untangling that bug we found some things that really needed to be improved in Octavia. For example, Octavia was actually deleting the load balancer information from the database instead of marking it as deleted, so that we could go back and reconstruct that load balancer with whatever was there before. But all of that has been fixed already, and if you're running from stable or from master you shouldn't be affected by those bugs. Other than this one bug, no, we haven't encountered more so far, and if you have something you can share with us to prevent us from hitting any... As far as I know, Octavia is just one VM using a load-balancing technology? No. Octavia is not just using one VM; it depends how you set up Octavia. If you set up Octavia properly, it will deploy multiple virtual machines. In our case, with the load-balancing service on the Catalyst Cloud, we deploy at least two virtual machines. So you are using a dedicated load balancer, a dedicated load balancer technology, on top of that? No, no. There is no dedicated load balancer, no external hardware.
It's all implemented in software; it's all Octavia, open source. All the work we've done always uses native OpenStack software. So Octavia deploys two amphorae for us; the amphorae have HAProxy running in them, and it sets up high availability, again with a server group with anti-affinity so they don't end up on the same hypervisor, and I believe it uses keepalived to create a cluster between at least two of these VMs. So it's highly available: we can actually lose one of the load balancers and it will continue working as expected. That's why I'm saying no high-availability issues with Octavia in this regard. And by using keepalived you succeed in balancing the...? Yeah, okay, great; pretty solid. The bugs that we found in Octavia were more subtle than that. Okay, thank you. No problem. Hello. Hi. I want to know which kind of ecosystem you use, and which the Catalyst Cloud users use, to provision and delete their resources in the cloud. I'm not sure I understand the question; when you say "which ecosystem", are you talking about a specific OpenStack distribution? Yeah, I mean some tools like Terraform or OpenShift, or something like that. In our case, as I said before, the code that we are running is vanilla OpenStack from upstream; it's not a distribution from any vendor. The Kubernetes code that we are running is vanilla Kubernetes from upstream, not a distribution from any vendor. Okay, and you mentioned you developed the Ansible module and the Terraform provider for the Catalyst Cloud? Oh, for upstream; it's all upstream. It's not specific to the Catalyst Cloud. The Terraform and Ansible modules that were created in this journey...
They are actually OpenStack modules, not Catalyst Cloud modules. It turns out that the Catalyst Cloud runs on OpenStack, so if you use the OpenStack modules, they will work with the Catalyst Cloud. But the work that we've done there, you can use on any OpenStack cloud. Okay, understood. Thank you. No problem. I'll just assume that we have time until someone interrupts me and says that we no longer have time, so carry on with questions.

Okay, maybe a quick question. You mentioned that you don't really support upgrading the Kubernetes cluster because it's challenging as of now; is it not easily upgradeable? So the question is whether it's challenging to upgrade the Kubernetes cluster. It's not that challenging, because if you're running with multiple masters and multiple worker nodes, one of the nice things about Kubernetes is that it is truly a stateless application. So you can actually take one worker node down at a time and roll the new version of Kubernetes out there, and you can do the same for the master nodes. By the way, behind the scenes that's running Fedora Atomic, in the sense that we deploy a bunch of Docker containers that contain all the Kubernetes components we're deploying, so it's extremely easy to upgrade. What we are working on is the code in Magnum and the code in Heat that actually allows you to orchestrate that upgrade process in a way that will not take the applications customers are running on top of Kubernetes down as we are rolling the upgrade.
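A minimal sketch of that one-node-at-a-time rolling upgrade pattern, in Python. This is an illustration of the idea only, with invented helper names; the real orchestration described above is done by Magnum and Heat, not by code like this.

```python
# Simulated rolling upgrade: move one node at a time to the target version,
# and only proceed once the upgraded node reports healthy again.
# All names here are hypothetical stand-ins for real orchestration steps.

def is_healthy(node):
    # Stand-in for a real readiness/health check against the node.
    return node["healthy"]

def upgrade_node(node, version):
    # Stand-in for draining, reimaging, and rejoining the node.
    node["version"] = version
    node["healthy"] = True

def rolling_upgrade(nodes, target_version):
    for node in nodes:
        upgrade_node(node, target_version)
        if not is_healthy(node):
            # Halt rather than continue and take more capacity down.
            raise RuntimeError(f"{node['name']} failed health check; halting upgrade")
    return nodes

cluster = [
    {"name": "master-0", "version": "1.9", "healthy": True},
    {"name": "worker-0", "version": "1.9", "healthy": True},
    {"name": "worker-1", "version": "1.9", "healthy": True},
]
rolling_upgrade(cluster, "1.10")
print(all(n["version"] == "1.10" for n in cluster))  # True
```

Because only one node is out of service at any moment, workloads with two or more replicas keep running throughout the upgrade, which is the property the orchestration aims for.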
And that orchestration is about taking one master node down at a time, one worker node down at a time, and as you reintroduce them to the cluster, making sure that they are healthy and working as expected. At that point, if the applications deployed by customers have a replica set of two or more, their applications shouldn't go down during the upgrade. One really nice thing about the upgrade process is that it's going to be a very simple API call, where customers can trigger the upgrade themselves and say, "I would like to go to this version of Kubernetes," very similar to the way people do it with Google Container Engine. Thank you.

I just wanted to ask: you mentioned that the integration with Cinder for persistent volumes is pretty unproblematic. Do you have a need for RWX (read-write-many) storage as well? Do we have a need for read-write-many, RWX, storage as well, where you have multiple writers to the same volume, multiple things writing to the same volume? No, so far the need for volumes that are mounted on multiple compute instances and exposed to multiple pods at the same time hasn't come up with our customers. Interesting, because I tend to think of that as a not-so-good pattern for applications running on top of Kubernetes; I would expect them to use storage in a different way. But because we don't have that need, we haven't seen that problem, and we don't know if it's supported, because at this point Kubernetes is just asking Cinder for a volume. I assume that if your cloud already has support for volumes that can be mounted on multiple compute instances, it would work, because it's just using the standard Cinder API and the Cinder interface. Block devices aren't going to give you that if you do have that need. And there are some design patterns.
I think where it does come up is, for instance, when you have containers collecting data and writing it to a common area, where it then gets aggregated. Yeah, so Manila can fill that role, complementing Cinder, and I'm just here as the Manila PTL to point that out. So if it comes up, that's a good fit. And since Manila doesn't serve up storage via the hypervisor, it just serves it over the network, it doesn't care whether it's bare metal or a container or whatever. That's what I was going to say when you phrased the question originally, talking about Cinder. The reason I said that's a pattern I wouldn't encourage people to use is because, if we were approaching that with customers, the first question would be: can your application use object storage? And if their application cannot use object storage, then it would be a Manila network file system. The other aspect is that the case for Manila would be that you need random access, rather than working with the whole object coming in and out. So, just checking and calibrating.

And because of your interest in Manila: we have plans to roll out Manila on the Catalyst Cloud as well. We run Ceph as our storage back end for block storage, and with CephFS we pretty much have a decent back end for Manila shares, and then we have an NFS front end for CephFS as well. Exactly. So, cool. Look, I'm pretty sure we are over time now. Thank you for all the questions; it was very interesting hearing your questions, and thank you for having us.