Okay, hello everyone. I think we're going to get started now; it's actually about time to start. Thanks for having us today, it's a pretty good turnout. I'm Sebastien, I work as a staff engineer. Where? Oh, I work for Red Hat. Yeah, sorry, it's on one of the slides anyway.

Hi everyone, my name is Sean Cohen. I also work for Red Hat, I'm a product guy, and I've been doing this with Sebastien for quite a while. Today we have the honor to share the stage with a new member of our team (new member on stage, old member on the work side), Giulio Fidente.

Yeah, I work for Red Hat as well, and I work on the integration of Ceph in OpenStack using TripleO, our deployment tool for OpenStack.

All right, so let's get started. I hope you like the pictures, but I think this basically tells the story of when we started OpenStack, and I'm proud to say that I'm one of the pioneers, at least on the storage side of OpenStack; this is my eleventh summit. We designed OpenStack to be something like an open source version of AWS. We were looking at the main data centers to serve our clouds, right? The original design of OpenStack was not necessarily to go all the way to the edge, but here we are today, in 2018, trying to do exactly this. And the reason is that a lot has changed. So over the next forty minutes we're going to try to show you not just what's happening in the current release, Stein, where we're putting the effort and investment as a community, but also where we're going after this summit, and pretty much for the next several summits: where the focus is going to be to try to make it happen.
We'll start with a quick reality check, talk about some of the use cases and the landscape, talk about 5G and how it changes things, and then we'll go deeper to understand the edge factors. When we talk about edge, guess what: it's a heavy buzzword, heavily used, and it doesn't mean the same thing to everyone. When we say edge, there's the edge site and the far edge site, all the way down to the IoT device. And I can tell you right now, we're not looking at all of the edge use cases; we're very practical. We're going to talk about the distributed requirements of edge, from the edge factors. 5G, by the way, comes by the standard with a distributed cloud out of the box.

Then we're going to talk about where we are today. Sebastien is going to talk about what we've done over the last years. We showed you what we can do together by colocating Ceph and OpenStack in a hyperconverged fashion. Today what we'd like to do is show you, as we push OpenStack to the far edge, what it means for storage. How can we basically put storage in a box closer to the edge and still deal with image synchronization, data availability, all that stuff, but with lower latency and a lower footprint? Obviously we're going to mention containers, we're going to talk about what we're doing in the Stein release specifically, and then, as I mentioned, what the roadmap is going forward.

I want to start with some observations. We started this journey, as I mentioned, with OpenStack looking at a cloud. But something is happening outside of our walls, our OpenStack walls I should say: we're getting smarter, and we're getting smarter every year in terms of our capabilities. In fact, next week on this stage we're going to have a smart countries conference. All of you are using smartphones, right?
We're talking about smart countries already, and this is taking place here in Europe, in Berlin, next week. So it's not just the endpoint devices we're going to care about; it's how they connect to the bigger story. If you want to go one layer down: smart cities. Believe it or not, one of the use cases of 5G networks is smart cities. And if you think back to the way you got here today: cars. Our cars are getting smarter; at some point they won't need us anymore. You can just enter a self-driving car. We have AI, we have machine learning, it can do it all alone. We have a lot of companies investing in this capability now, as you know. My own car is a hybrid that already has some of these functions and can partly drive on its own.

How many of you don't like to take the car to work and actually bike to work? Raise your hand. Right, the green people.

All right, let's do a quick reality check. We're talking about augmented reality capabilities; this is one of the uses coming our way. But we still want to take that journey, and this is a digital transformation journey. You heard the keynotes in the morning, some of the segments, like the digital bank that went completely online and did the disruption. But the way we're going to communicate is different. One of the things I want you to start to adopt in your way of thinking is this bike: regardless of the new capabilities we're going to gain with the new services, it's going to move. The mobility is still there, and the way we move from one point to another is going to affect the service. And where are we going to get the service? We are in the open cloud infrastructure business. This bike will take me to the next stop, which might be a national park. Guess what? There are limited antennas in that area.
There's maybe one close point of presence that I'll be connecting to, and I'm going to get the service from that closest antenna in that national park when I ride my smart bike. That's what we need to solve, in a programmatic way.

So I mentioned 5G, and I mentioned that part of the 5G standard is a distributed cloud by nature. As we ride our bikes or drive our smart cars from one point to another, user experience continuity is key. I cannot drop my service; I want my floating IPs managed no matter where I take my mobile device, and that mobile device could be a drone at some point. So we still need to care about the service. Obviously 5G comes with an endless number of the same cloud tenants and endpoints we used to support on day one when we designed OpenStack; now we need to support thousands of units connected to our clouds. And again, I also need to maintain reliability. Everybody in the telco business knows they have five nines. Nothing has changed on compliance, nothing has changed on security; in fact security gets harder. Because at some point, think of the Chick-fil-A use case, I'm not sure if you've seen it: you can have an edge box in every retail store, and someone can walk into your retail endpoint, take the box, and leave. As they leave with the box, what have they taken with them? How do we secure the box? What actually lives on the box? Those are the new questions you need to start asking yourself.

And obviously, we're not going to start by tackling every one of the edge use cases. A lot of us are NFV players or enterprise players with edge use cases; some of the edge customers
I'm serving are actually in the public sector, and they need these new capabilities closer and closer to the customer premises. But again, the use cases are not just retail, as I mentioned; it's your home, smart homes, smart cars, and so on. We already talked about the virtual reality aspects and new services. So if I'm a consumer, I really want the ten-times-faster speed, and it shouldn't matter where I am with my new device. I also want not just faster service; I want the new services. It's not about doing things faster, it's what these new capabilities actually unlock. IoT is around the corner with these new capabilities.

And when we look at the edge, it comes with a lot of constraints, 5G specifically. Latency: the distance is less than a hundred kilometers; that's what we're trying to solve for. Bandwidth is limited. Resilience: I have to make it autonomous. As I said, that box may be disconnected from the network for hours, days, weeks, and suddenly pop up on the network again. How do I push new application updates to it, over-the-air updates? How do I maintain the metadata, the metadata of the images and so on? I mentioned the regulation, which is not going anywhere and is actually enforced at the edge. And obviously I need a way to do all the day-two operations: I need a way to upgrade the box. None of those requirements goes away.

One thing I have alluded to is the scale. We're talking about tens to hundreds of sites, each of them basically serving tens of nodes. And I'm going to visualize this for you. One of the things I heard in a previous session we had with the edge working group is that people have the assumption that we're providing services from the centralized site all the way down to the far edge, and I have to correct that misunderstanding.
This is not what we're trying to do. What we are trying to do is provide you that service, if you're in that edge block on the right, from the closest point of presence to you, and not from a site further up the chain, or from the central site for that matter. That's what we're trying to do. I need a way to deploy images from the centralized control plane to these edge boxes, obviously, but the service itself is actually much more limited; it's a smaller problem to solve. And this is another great visualization of what I just said: I'm not trying to take that one-to-one ratio we had in the original data centers and apply it all the way down. Because, guess what, in the previous session someone mentioned the I/O: I'm not going to push all that I/O down all the way to the far edge. There's no point.

And when we talk about edge, there are basically three factors you need to bring in. The first one is deployment, and I mentioned all the day-two operations that don't go away; they actually get more complex. How do I do upgrades and updates of that edge? The form factors are not the same, as you've seen here. It's not the same edge form factor if I'm running on a provider premise, where we see edge cloud and central office, which can be a PoC or a PoP, a point of presence, or a branch office and so on. And it's not the same endpoint as the end devices, which can number from thousands to millions. The third aspect is workloads. We heard that we're still serving legacy and traditional workloads in our clouds. Now we need to basically take the same workloads, or at least strip them down, to be able to run at the edge. And I have news for you: they're not the same. Some of them are cloud native, some will only run on Kubernetes at the edge connected to our OpenStack cloud, some will run on bare metal only, or as containers deployed over bare metal. They're not the same.
So when we talk about deployment, it actually connects to the workloads; the workload is what dictates it. If the workload is stateless, then we're probably going to have a cache in that box. Sometimes, some of my customers talk about something like half a terabyte, and that's it. So how many images can you cache there? That's it. But guess what: if the workload is stateless and that edge box dies, it doesn't matter, right? We have another one providing the service. We're not trying to do DR between edge and edge; our distribution model is from the PoP, the point of presence, to the edge.

This is a great framework that the Akraino Edge Stack working group put together, and I really like to adopt it for how we treat things today, also when we talk about storage. I would argue that OpenStack from day one was doing pretty much the Cruiser, the large pods, and the Tricycle, the medium pods. This is something all of you are doing today; nothing new there, we can handle it today. When we talk about the first edge use cases we're going to solve in OpenStack, we're talking basically about the Unicycle pod and the Satellite. The last thing we're going to deal with is the Rover; that's already thinking about the customer premises, thousands or hundreds of thousands of devices.

When we talk about Ceph hyperconverged: where do we need storage co-located in the box? The stateful applications need that capability; that's where you want it co-located. Now, I'm not sure if you know, but we already containerized OpenStack. OpenStack can be deployed as microservices today, and we can already deploy Ceph containerized, so that problem is solved. As we move closer to a distributed model, we need to do more of this: we need to co-locate as much as we can, in a containerized fashion, in our deployments. This is just a quick view of how the different use cases map to the deployments.
So, as I said, the national core and regional core stuff we already do today. Most of the public carriers and providers, in North America for example, that use OpenStack, and the majority are using OpenStack, are already doing it today. What we're trying to address in the near future is what we call the distributed compute nodes; that's pretty much the first line, and it tackles two of the boxes I mentioned earlier. And we're going to finish our talk by going into the last one.

So now that we have a better understanding of the form factors and the things we need to care about as we start to talk about storage, I want to hand it over to Sebastien to talk about what it actually means to run Ceph at the edge.

Thanks, Sean. So now we're going to dive into some architecture considerations. As Sean mentioned, as we move to the edge, there are really fundamental changes that apply to your platform. So we're going to go through some examples of what has to be considered and done in order to properly deploy to the edge. First and foremost, I guess it's a given now, but we really have to start implementing hyperconverged infrastructures. We have been talking about this for months and years now, and it has been the real enabler for this kind of setup. Once you go to the edge, the requirements are really different from traditional platforms: there is no such thing as high-performance, big computation workloads, so the way we deploy and configure the storage will be completely different. So we have to do HCI, and basically HCI consists of collocating compute and storage resources on the same machine.
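To make the co-location trade-off concrete, here is a rough sketch of the kind of resource partitioning a hyperconverged node needs: the OSDs take a fixed share of CPU and memory, and only the remainder can go to Nova guests. The helper name and the per-OSD figures are our own illustrative assumptions, not TripleO parameters or official sizing guidance.

```python
# Illustrative sketch of splitting a hyperconverged node between Ceph
# OSDs and Nova guests. The function name and per-OSD figures are
# assumptions for illustration, not TripleO defaults.

def hci_partition(host_cores, host_mem_gb, num_osds,
                  cores_per_osd=1, mem_per_osd_gb=5):
    """Reserve CPU and memory for the OSDs; the rest goes to guests."""
    nova_cores = host_cores - num_osds * cores_per_osd
    nova_mem_gb = host_mem_gb - num_osds * mem_per_osd_gb
    if nova_cores <= 0 or nova_mem_gb <= 0:
        raise ValueError("node too small for this many OSDs")
    return {"nova_cores": nova_cores, "nova_mem_gb": nova_mem_gb}

# A small edge box: 32 cores, 128 GB RAM, 8 OSD disks.
print(hci_partition(32, 128, 8))
```

In real deployments this arithmetic feeds settings like reserved host memory and CPU allocation ratios; the point here is only that the split has to be decided up front, because the guests and the OSDs compete for the same box.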
In this particular case, since again we don't really have any big performance constraints or requirements, this gives us better hardware utilization, which is a really nice thing to have. The types of applications you'll find when you go to the edge don't really require big performance; from VNFs to things like caching, these will definitely happen at the edge, but they are really lower-footprint services.

This is a little bit on the side, but it's also really handy to deploy this kind of infrastructure if you want to do a PoC or a pilot, because it's fairly minimal: everything can be contained in a single box, or in three boxes, depending on how small you want to get. That's also really convenient for getting your hands on the environment, the services, and the service APIs, interacting with them, configuring your applications against them, and exploring the different interfaces that are available with that kind of stack. As a reminder, in this particular example we are really focusing on Ceph, and Ceph is a unified storage system that provides different interfaces to access your data: object, block, or file system. So again, that's a really good way to get your hands on the new technologies.

If we dive a little bit into what a distributed compute node actually is: this is the typical representation of all the services that will be running on those distributed compute nodes. You will find compute resources, so all of your VMs, and the OpenStack services, all of them containerized. One of the major things changing from the traditional way we deploy OpenStack environments is this:
in this particular example, the mons are also deployed at the edge, and the managers are also running there. If you're not familiar with Ceph: the monitor is the brain of the cluster; the manager is responsible for gathering info and metrics about the cluster; and the OSDs are the object storage daemons, which are responsible for basically writing, reading, and replicating data and healing the cluster. Typically when we deploy OpenStack, we put the mons and the managers on the control plane, because they are the services controlling Ceph. But in this particular example, because the compute part is at the edge and the control plane is at a different location, the whole Ceph cluster is configured on that particular machine. Obviously, once you have this kind of setup, again, you get better resource utilization; everything is a container, everything is isolated through namespaces, and you basically get all the goodness of containers for performing upgrades, and even rollbacks if necessary.

So what does distributed HCI look like? This is kind of a high-level diagram, but we will dive into a more concrete diagram in a couple of slides. Typically, the way it's represented is that you have a centralized site where the control plane is running. The control plane represents all the service APIs; there is no storage involved in this component. And then you have the different sites, which represent the edges, and those sites run the HCI nodes we just discussed. Yeah, this slide basically summarizes everything I just said.

One of the challenges we face when deploying this kind of infrastructure is that we have to find a way to distribute cloud images, because again, if we go back to this particular slide, the control plane is over here, but the storage and the compute reside on this side.
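Since the mon/mgr/OSD split just mentioned decides what has to run at each site, here is a toy model of those roles, for intuition only. The class names and logic are invented for illustration and bear no relation to the real Ceph implementation or its APIs.

```python
# Toy model of the Ceph daemon roles described above: the monitor
# tracks cluster membership, OSDs store and replicate objects.
# Everything here is invented for intuition; it is nothing like real Ceph.

class OSD:
    """Object storage daemon: holds object replicas."""
    def __init__(self, osd_id):
        self.id, self.up, self.store = osd_id, True, {}

class Monitor:
    """Brain of the cluster: knows which OSDs are up."""
    def __init__(self):
        self.osds = []
    def register(self, osd):
        self.osds.append(osd)
    def up_osds(self):
        return [o for o in self.osds if o.up]

def write_object(mon, name, data, replicas=3):
    """Replicate to `replicas` live OSDs, like a size=3 pool."""
    targets = mon.up_osds()[:replicas]
    for osd in targets:
        osd.store[name] = data
    return len(targets)

def read_object(mon, name):
    """Any surviving replica can serve the read."""
    for osd in mon.up_osds():
        if name in osd.store:
            return osd.store[name]
    return None

mon = Monitor()
for i in range(3):
    mon.register(OSD(i))
write_object(mon, "img-1", b"cloud-image-bytes")
mon.osds[0].up = False   # one OSD dies at the edge site
assert read_object(mon, "img-1") == b"cloud-image-bytes"
```

The point of self-contained edge clusters is exactly this: the replication and healing loop stays local to the site, so a disk failure there never needs the central control plane.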
It's kind of a tough question, because even though the control plane is detached from the compute and storage resources, we still want the ability to have cloud images replicated across all the edges, and not necessarily centralized on the control plane. Because remember, when you boot a VM, you have to fetch that image, and if you're far out on the edge then there are things like latency involved in the process, and this might take a while. So we really had to think about how to get the right design. It's an initial step, and Giulio will dive a little more into this, but this is primarily one of the biggest challenges we have at the moment: to be able to replicate images, or at least have them available. So for now, we will continue to keep those images on the control plane, and they will basically be fetched onto the compute and storage nodes. Which is not necessarily the best thing, but there are so many ways you could implement this, and at the moment we don't really want to disrupt too much of the way things are implemented.

The longer-term design would be to have images stored... well, basically, it starts once we can get Glance to support multiple back ends. Once we have that, we can go further, and with Ceph really be able to pin images to a particular location, which in this case represents an edge, and tell Glance: okay, this image belongs to that edge. And then replicate them another way. But I'm diverging here. Yeah, I know this work is in Glance, and it's an initial step; the way I see it, I would want it implemented for Ceph first, but obviously that's not the case, because we have to do it in a way that is really generic.
So Not everyone is using Seth although. I would love to see everyone being using it but at first Because the API is really generic. We have to implement this in a way that Any back any particular back end could consume this the same way and they have the the same experience in the end So that's a different way to put it to put it I guess So this is this is another diagram which basically captures How the setup is going to to look like so again, you will have this the centralized Place where all the the open-stack controllers are running Well, they they typically come by by three and then you have all the edges the remote site And this is basically a zoom out of what I just showed where you have the VMs You is decent in the months and then you can have as many as many pods pods edges basically as as you want But then I would like to to hand it over to Julia who will be Telling telling us how to get there. What are the new challenges that we will be facing? What's the state of the integration? 
and what are the working groups and discussions that are currently happening.

So I want to talk a bit about how these concepts translate into TripleO, what the current status of things is, and a little bit about our ideas for the future. I would like to start from what was discussed before, in another session, by the edge working group, because we are trying to join forces. The edge computing group has a few use cases, one of which, that the group mentioned, is the mobile service for 5G use cases, and it came up with a diagram about how that specific use case could be implemented with OpenStack. I would like to start from that diagram, and also compare with it and see how we are approaching the same issue. I will also try to use the same terminology that's in the edge working group diagram, even though I'm probably more familiar with the TripleO terminology. So this was the diagram proposed at the previous PTG, and it's split mainly into three layers, based mostly on the scaling factor.
The expectation is that on the far edge sites you might have something like a hundred different deployments, and on what's defined as the edge sites you might have something like ten deployments, and all of them are federated, mainly because of the authentication and because of the images, in one main data center.

Trying to match that with what is happening in TripleO: we should be implementing something that, for the edge sites, looks very much like the existing TripleO controller, where all the OpenStack services and the APIs are deployed. And we should be implementing another set of roles defining what the far edge site is, and this looks a lot like a TripleO compute node, plus what Sebastien discussed: persistent storage, which eventually, with Ceph, means co-locating compute and Ceph.

So we would be implementing more or less something like this. On the far edge site we would have nova-compute, the Neutron agents, Glance API in caching mode, cinder-volume, and the whole set of Ceph services; obviously, we're talking about deploying containers, across a relatively small set of nodes, let's say three. And on the edge site, which is where all the OpenStack services are, we would have the entire set of APIs, schedulers, database, and orchestration.

I want to look a bit more at how we are approaching the storage issues, because this talk is focused on how we are using Ceph and how HCI is beneficial in this scenario. There are mainly two issues: one is with the persistent storage, so Cinder, and the other one is with images. For Cinder, we opted to go active-active on the edge side, so you might have, say, three instances running at the same time, which relies a lot on a correct implementation of the back-end driver for Cinder. And we committed to making Ceph, the RBD driver I should say, one of the drivers that is tested first and behaves correctly in active-active configuration. This was extremely useful
This was extremely useful To avoid pushing the need for pacemaker in the far-edge side, which really didn't want to because of the additional hardware requirements We also Well, we will probably work on a set of custom rules in three blow to make sure that you know If you have ten compute nodes in one of the fire each side, which is eventually not that small You don't really need to scales in their volume on all ten of them or the set point or some old ten of them So there will be probably at least a couple of roles while for the Farage site of like to point out that the way how we are grouping together resources is that safe Which will be an isolated safe cluster in every far-edge site is going to be I'd say the locality with the Nova nodes and the Cinder nodes is mainly given by the use of Availability zones and not regions so in the control plane you will see different availability zones for its far-edge site and Because of how it is implemented in triple you will be able to scale independently the central site from each and every far-edge site So there are no Changes to the far-edge site not even in the number of nodes that require changes in either the control plane or the other far-edge sites For image things are a bit more complicated that's something that Sebastian was approaching earlier So ideally for a back-end like safe you would like to use Yeah, a mechanism that allow you to they duplicate that I'm not really copy the same image over to every each side which We're approaching with RBD mirroring, but let's say the triple is not there yet at least not for the stain cycle So what we will see in the stain cycle is more like plans using cash is locally So that every image which every far-edge site every far-edge site needs will be initially pulled over From the central site, but then we'll stay in a local cache. 
The images thereby end up closer to the actual compute nodes. This also plays well with two interesting challenges. On one hand, we want all the images in the central site to be available to the far edge sites, but we don't really want to replicate them all, because we don't have as much storage there. On the other hand, the local cache is currently well supported in TripleO, so we could get it done relatively quickly and actually working in Stein, while using multiple back ends, as was pointed out earlier, is too much for Stein, so let's put that aside. What we are putting in place now with caching is a building block. We obviously want to make it better; it's just not happening in this release.

And this is a diagram, similar to what Sebastien was showing, just a little more detailed about how the services are distributed. What you have at the top is the ideal deployment of our control plane, together with the undercloud. This is not very different from what people deploy already today, except there are no compute nodes, and there can optionally be a Ceph cluster if you use it for other reasons, but again, it's not necessary. While in each and every remote site, we will have a Ceph cluster, some compute nodes, Cinder working active-active, and Glance caching active. If you have questions, I'm happy to discuss this later after the session.

To zoom in on the issue we have with the images: currently, the previous diagram would require Glance to pull the image into each site when the image is needed the first time. Some people asked why we are not pre-populating the Glance cache. This goes back to the same issue: we don't have enough storage, and we don't necessarily use all images in all far edge sites.
So yes, pre-populating would help, because on the initial deployment you would already have the image locally available, but it has some drawbacks. Yeah, so this is how your far edge site would look, and I would like to hand it over to Sean again to discuss a bit more of what is happening in the future.

Thanks, Giulio. So, you saw a lot of movement; I promised you, right? This is a complicated problem to solve, but we're getting there, step by step, and you can actually help us. I want to talk about the near-term roadmap as well as the longer-term considerations. Some of the work highlights: temporary edge and far edge disconnects. I mentioned that earlier; in some of the use cases that box may be disconnected from the network for hours, days, or weeks, and then suddenly pop up. How do we push updates to it? Giulio talked about the initial deployment: we need to get all of those cached images populated there to begin with. That's a chicken-and-egg problem: how do we bootstrap it? The edge working group is focusing on that aspect as well. In some cases, because the workloads are completely stateless, some of the edges, the far edge, have no storage requirements at all, as I mentioned: it's purely a compute node, maybe running a containerized workload at the edge, or bare metal, as I said. Then there's finding the right balance with the Ceph monitors, using container resource allocation: there's a whole resource management problem that we solved with HCI in the main data center, because with HCI you're always fighting over memory, fighting over CPU, right?
So we still need to deal with that, but in a different way, and that's something we will continue to look at. And we need the ability to deploy multiple Ceph clusters; this, as you've seen, is what we're already working on in the Stein release. And finally the cache, which is how we deploy by default now; previously, if you deployed Glance, the cache wasn't even enabled by default. That changes as we deploy edge roles, and we have a new role. By the way, in the previous session of the edge working group, someone asked when we're going to have a stripped-down version of OpenStack running only the services we need. I have a news item for you: that's what we're doing right now. As you've seen, at the edge site only the specific services that need to be there are going to be there. We don't need the full-blown cloud at the edge.

And finally, the Glance image synchronization and replication mechanism; that's pretty much this diagram. We need to improve it, because it's going to be key for our workload delivery. At the end of the day, it's all about the workload: how we can refresh that workload, its metadata, and so on at the edge sites.

So, as I showed earlier, the initial focus of the distributed HCI work is the Unicycle pod and the Satellite. As we go deeper, we're also going to address the right side, which is the Rover.
That is, thinking about the remote site closest to the customer premises. But in order to get there, we need to go step by step. Going back to what I said earlier about the last one: right now, with the distributed compute node model that Sebastien showed, we are focused on solving the Satellite and the Unicycle footprints. But we also want to start dealing with the distributed Rover, which can be deployed by the thousands and so on. The objective there is multiple standalone servers, deployed from a single location, connected to a centralized site on demand, and re-synchronizing the metadata. This can be a standalone server, maybe just one box; let's put it on the table. With compute, sometimes with storage, sometimes without; not all of the workloads are alike, stateless and stateful. And obviously there are limitations; HA is a consideration, but in some cases, the way we're going to design the services, I really don't care if that single box dies, because I have other ones to take care of the service. The key point, as I said earlier when I showed you the goal of edge: as I move with my mobile device, I want to maintain the service. The experience should be what I care about, and that's what we're trying to achieve with every one of these footprints.

So, to summarize OpenStack at the edge: we have more than one deployment model; it's not one edge that we care about. We already figured out how to deploy large clusters; we've been doing that successfully for years. Now we're looking at the close edge, the distributed compute node, and finally we're going to get to the standalone use cases, which is like one box, right?
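The "standalone server that re-synchronizes on demand" idea above can be sketched like this: the edge box keeps serving with its last-known metadata while disconnected, and when the link comes back it pulls only the records newer than what it already has. All class and method names here are invented for illustration; this is not an OpenStack API.

```python
# Sketch of on-demand metadata re-synchronization for a standalone
# (Rover-style) edge box. Versioned records let the edge pull only the
# delta after a disconnect. Invented names; illustrative only.

class CentralSite:
    def __init__(self):
        self.metadata = {}      # key -> (version, value)
        self.version = 0
    def publish(self, key, value):
        self.version += 1
        self.metadata[key] = (self.version, value)

class StandaloneEdge:
    def __init__(self):
        self.metadata = {}
        self.synced_version = 0
    def resync(self, central):
        """Pull only records newer than what we already have."""
        pulled = 0
        for key, (ver, value) in central.metadata.items():
            if ver > self.synced_version:
                self.metadata[key] = value
                pulled += 1
        self.synced_version = central.version
        return pulled

central = CentralSite()
central.publish("image:cirros", "checksum-1")
edge = StandaloneEdge()
edge.resync(central)                         # first sync pulls everything
central.publish("image:rhel", "checksum-2")  # published while edge is offline
assert edge.resync(central) == 1             # reconnect pulls only the delta
```

The design choice this illustrates is that the central site never pushes to a box that may be offline for weeks; the box reconciles itself whenever connectivity returns, which is what makes thousands of Rover-style sites manageable from one location.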
And I can tell you that our ecosystem of hardware providers is actually building new boxes now. So don't think about your regular pizza-box servers anymore. We're going to talk about stripped-down OpenStack, and stripped-down hardware as well, for those use cases. So that overall transformation is happening now, and the good news is you can be part of it, right? That's the key takeaway: we're taking gradual steps, and the first step we're talking about is happening now.

So if you want to join us, the edge working group is the place; we have IRC, obviously a channel, mailing lists and so on, and you can follow up with us on any one of the specs that we just touched upon today. This is not science fiction, we're actually working on this. And the good news: we started this already at the Stein PTG, the gathering, and the Etherpads are amazing. The reason I put it there is because you can see and hear all the voices in the room, and they're not yet consistent. But what's going to happen is that with every PTG we're going to go forward, we're going to get consolidation, and we're going to get prioritization of what we can do next, right? And that's what it's all about. I want to take the opportunity to thank you for coming and open the bar for questions, and I want to invite again my two distinguished colleagues, Julien and Sebastien.

Yeah, thanks for having us today. All right, please use the microphone if you have any questions. I think we have like five minutes. Yeah, five minutes.

So in your last slide you had two control planes, the main cloud... yeah, so you had a control plane in site A and in site B. Is there a plan for synchronization, or are those sites always going to be monolithic?

So I have another slide in my backup slides, but I took it off; it deals exactly with this. One of the use cases we have is a centralized component with two control planes, right? The reason I didn't put it in is intentional: we're not there yet, right?
So we still have to solve the initial use cases before we go deeper into that, because it's a different set of problems. But that doesn't mean we don't think about it, right? The point I was making about the edge PTG: we listed all the requirements there, but we need to start somewhere. This I would call the advanced use case already, but it's not something we prioritized to start with.

Yeah, no, I agree with your roadmap, start small. And we're out of time. It's good, thank you.

Any additional questions?

I saw that you have some sites where you don't have a control plane. My question is, how do you handle, for example, RabbitMQ connectivity when you have latency issues on your network? How does it work?

The existing workload remains up because it's not affected by disconnects, but yeah, the cloud is not operable, I'll say. You cannot really go and create new workloads on a cloud which is disconnected anyway. So yeah, that's how it is. The workload which is already there remains active, because there's nothing impeding the local Ceph cluster from, you know, delivering service, or the compute nodes from keeping the guests up. So the existing workload remains active.

But in the far-site disconnect case, probably you should have a RabbitMQ for each site, or didn't you consider that?

So there are many different... we would have a similar problem with the database as well, and we could have a similar problem with the scheduler. Let's say there are pros and cons to every different solution. And one of the requirements that we had, that we really wanted, was to not need to deploy Pacemaker at the far site. So for the database that would be impossible. For RabbitMQ it's probably more reasonable, because it doesn't really need Pacemaker, but it still adds load on a node, or a relatively small set of nodes, which in theory is just delivering a service.
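The "distribute services differently" experiment mentioned here comes down to TripleO's composable roles: a custom role in `roles_data.yaml` lists only the services a node type should run. As a rough sketch (the role name and the exact service list are illustrative assumptions, not a supported edge role definition), a stripped-down far-edge role might look like:

```yaml
# Illustrative custom role for roles_data.yaml; the service list an edge
# role actually needs is an assumption, not a supported role definition.
- name: FarEdgeCompute
  description: Standalone far-edge node with compute and a minimal service set
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::NovaLibvirt
    - OS::TripleO::Services::GlanceApi   # local image API/cache at the edge
    - OS::TripleO::Services::Timesync
    - OS::TripleO::Services::Sshd
```

Adding or removing a line here, say a local messaging service, is how you would test the per-site RabbitMQ trade-off being discussed, without changing the deployment tooling itself.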
So, but yes, we could play with it, and my take is that TripleO is very good at that. You have a very flexible way of customizing your roles and distributing services differently. So this is actually relatively easy to try with TripleO in particular.

Okay, thanks.

Last question.

Hi, is there any intention to reuse the image cache, or part of the image cache code from Nova, in the caching that you plan in Glance?

Yeah, I think this is definitely what we want to improve, because currently we're not able anymore to take advantage of the copy-on-write clones from Ceph: all the cached images are again flat files on the file system. We got rid of that before, but the initial implementation has just flat files, so each time you boot a VM, they will all be backed by this particular file, which is not ideal. And the goal later is, once we fetch the image, to put it directly into Ceph. So once you boot your VM then, that's way faster. Yeah, so we would like to take the benefits of Ceph obviously when we can, but again, there are things we need to do first.

So we would like to thank you again for coming. We're available here, and you can follow up on Twitter, and feel free to join our discussion in the working group and so on. And have a great summit.