So my name is Dan Smith, I work at Red Hat. I'm Belmiro Moreira, I work at CERN, and we're going to talk about scaling Nova with Cells V2.

I'll start out with a couple of slides from my Boston Cells presentation just to level-set on architecture. Historically in Nova, this is kind of what your deployment looks like. You've got some API nodes on the left, which your users interact with. You've got a whole pile of compute nodes on the right, and they intersect at the database and the message queue. And depending on how large your deployment is, the database and the message queue become single points of failure and points of contention, both for the APIs accessing the database and for talking to the compute nodes.

So in the past Nova tried to solve this with something we called Cells, which we now call Cells V1. And this worked by... you still had your API nodes and you still had a bunch of compute nodes, but we would shard those compute nodes into groups, and we would attach them to a database and a message queue that was really just for that chunk of compute, reducing the amount of work and data that each one of those had to handle. But because it was kind of a bolt-on to regular Nova, we still had to have a unified database that we put in front of the APIs. The API nodes don't really know about this sharding, so they need to look at a unified database. So we had this component called the Nova Cells service that basically looked at all of these fragmented databases and composed a unified view into the top-level database, which the API could then use to expose to users. The whole point of this was to provide a single pane of glass, the appearance of a single deployment, in front of basically chunked-up pieces of Nova.

And the problem here... well, there are several problems, but one is that this router, this extra little piece, basically had to implement every single feature that we had in Nova. It did it with separate code and a separate code path. So obviously we started with some delta of features that weren't supported by this separate router, and over time that only grew, because people would implement a feature for Nova but they wouldn't implement it for Nova Cells as well, since it had to be done completely separately, and so that gap just kept growing. The other problem is that this proved to be quite lossy. It required a lot of manual human care and feeding to keep it running: to keep the databases synchronized, to fix up issues when it didn't synchronize something it was supposed to, as well as to hand-synchronize things that it just assumed you were going to synchronize for it.

So we moved to this newer desired architecture, which we now call Cells V2, which is similar. We keep the API nodes, we keep the compute nodes segregated the way we want them, but instead of building this glue to lie to the API nodes that they were just looking at one Nova, we decided we would teach the API nodes about the sharding, so that they natively could look at each of those databases and know that it wasn't just one database being glued together by this lossy cell service.

So in V2, this is kind of what your services look like. You've got a top layer of services, your control services: your API nodes, your scheduler, which talks to placement. Those services all have kind of a global view of the whole deployment, and they know how to talk to the pieces. There's still a database at the top, but it's not the database that we had in V1.
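As a concrete aside, not part of the talk: the two database layers being described show up in nova.conf roughly like the minimal sketch below. The hostnames, database names and credentials are made up for illustration.

    [api_database]
    # Global data: flavors, key pairs, and the cell/instance mappings
    # that the API and scheduler use to find the right cell
    connection = mysql+pymysql://nova:secret@api-db.example.org/nova_api

    [database]
    # The per-cell database where the instances themselves live;
    # used by the conductors and computes inside that cell
    connection = mysql+pymysql://nova:secret@cell1-db.example.org/nova_cell1

The API-level services don't read the per-cell connection from a file like this; they look up each cell's database and message queue URLs from the cell mappings stored in that top-level API database.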
So it's not an aggregated view of everything, all of the data in the deployment squashed into one thing. It's really just data that is global to the deployment, so flavors and things like that, as well as some mapping information to know where an instance is, whether it's in this cell or that cell. And then down below you've got what we now call the cells, which used to just be the Nova database. Here you've got compute services, obviously, because the whole point is to shard this out. You've got some conductors down here as well to do some of the task offloading for these things. And then you've got the main database and the main message queue for all of these compute nodes. This is where the instances live, and the API knows how to talk down to each of these.

So, design and development tenets for V2, having come from V1: we had some things that we wanted to accomplish with V2. Probably the most important one is that V2 should not be an opt-in, different code path. You shouldn't have a Nova deployment that either is cells or isn't cells and runs different code depending on that; that's the whole point of redoing it, really. Full upstream testing, which we definitely did not have with V1; we had some testing eventually, but it was extremely minor, and obviously moving to this unified way of doing things means that we could get this upstream testing. And we wanted cells to not be visible to API users, meaning the whole point of this was to provide a unified single pane of glass in front of a Nova deployment; we didn't want to push the burden onto the users to need to know that all of these pieces were separate. We wanted to get rid of that replicating service. A big problem with that was either not replicating data in both places, or replicating it and keeping it in sync, so the point here was to decide, for each piece of data that Nova keeps track of, whether it is a global thing or whether it lives in the cells. And of course, aim for no "only supported without cells" kinds of features. Like I said, the feature gap with V1 was large and it grew over time. We also had some features that technically worked in V1 but behaved differently, or weren't atomic if you were on V1, and the user obviously has no visibility into that. So we were trying to avoid having any of those kinds of things that behave differently.

Obviously, the point here is performance, to scale Nova. So we wanted to optimize this cross-cell type of behavior. Specifically, instance operations are the most important ones, because that's what users are doing, and those needed to remain efficient. And we said that we would introduce caching and fault tolerance and additional things on top of that to solve hot spots and performance issues when we got there: try not to pre-optimize, but let people deploy it in their environment, find what the issues were, and take care of them when we got to that point.

So obviously, this is a huge challenge. We had two camps of users running different pieces of Nova, and we were trying to get them all onto a unified set of code, a unified architecture. We had people for whom cells V1 was never going to be a thing: it was too complex, the feature delta was a problem, and they didn't have the manpower to do all of the hand care and feeding and constantly patch things up. And on the other side, we had people for whom cells V1 was necessary. They needed it for the scale. They were willing to not have some of the features.
They were willing to dedicate staff to keep it running. So obviously, it's a challenge to get those two very conflicting ways of doing things onto a single set of code. We had to be able to provide a transition for those people; nobody was going to throw their deployment away and move to a completely different way of doing things. So we had to come up with a way to get regular operators operating in this kind of mode, but without additional runtime requirements, extra things they have to do. And obviously, we had cells V1 operators with huge deployments who were willing to put up with the staffing needed to keep things running, but they had built these big deployments and they obviously couldn't just throw them away.

Even so, with all this refactoring of Nova's internals to get to this point, we had to do it over the course of many years, but without just freezing Nova. Lots of other things were happening inside Nova while this was going on. When we started this, API microversions weren't even a thing, placement wasn't a thing. These are all other big things that have happened in Nova in parallel to all of this refactoring, which was a huge challenge: to not have it all fall on the floor, but also provide a path for people to get from pre-cells to cells V2 without having to throw anything away. And of course, the world changed underneath us while we were doing this. We started this with a very conventional, Rackspace-initiated way of thinking, where a public cloud region was a huge deployment in a data center and the cells split was really just for scale. And now we've got people talking about deploying seventy clouds across... sorry, I'm not giving anything away... but lots of people with lots of cells, geographically distributed. That wasn't a thing that we were really thinking about. And all this edge stuff, with people talking about very small deployments but across a WAN link and things like that. So all of this was going on in the background while we were trying to effect this change.

So how did it go? I think mostly good. We introduced bugs and code churn, obviously, because this is such a major undertaking. I think we made it through most of that stuff; we came up with solutions for all the things that we found. We had some maybe rocky... not Rocky with a capital R... but some bumpy releases where things happened, people deployed, and we realized that there were things that needed to be accounted for. But overall, I think we've made it through the really difficult bit. Regular operators have a little bit more that they have to do, mostly in the setup phase but not really operationally, which was the goal. Existing cells V1 users had a big transition: they have huge deployments, lots of cells, lots of data, and it obviously is a lot of work to reshape the cloud on top of this new architecture. We had a lot of challenges where people had built a cloud on cells V1 and absorbed some of the quirks of that architecture into their expectations or the way they handle things, and obviously V2 is a very different architecture with things living in different places. So there were definitely some bumps in the road there, but I think we made it through.

I think another really good thing that came out of this was some of the cleanups and stricter rules about where we put data in Nova, how we classify it, how we talk to it, and how we talk about it.
I think that in addition to being able to do this split for scale, there are some non-scale things, like federated Nova, where having classified these bits of data and put them into buckets means we can talk about them a little more abstractly, which I think was an unintended but good side effect that came out of this.

So, status in Rocky: fully developed and tested in mainstream Nova. Anybody that's running a recent release is running cells, whether you like it or not. You might have only one, but that was the goal, to get everybody there, and we made it. We have pretty good multi-cell performance. We've spent a lot of time optimizing those instance operations, like I said, which are very important as the front line of what your users are going to hit. We still have some admin-type operations that are a little bit more naive, but those are within the control of the operator, so hopefully a little bit less important, and they're on the road to being fixed. We have a few remaining functions that still don't work exactly perfectly in cells V2. It's an extremely small subset, there are workarounds for most of them, and we're kind of punting a little bit in order to solve those the right way instead of trying to come up with a kludgy way of resolving them. Like I said, performance is pretty good now; it's been rapidly improving over the last several cycles. In each of the last three cycles or so, people have done serious performance testing and come back with numbers and hot spots, and we've addressed a lot of those, which is cool. Fault tolerance is still naive, but there's a lot of work going on in Rocky to make that a lot better.

So what's next? Cross-cell migration was never a thing that V1 was really ever going to have on its radar, but it's a possibility in V2, which is cool. This further eliminates any sort of artificial restrictions that come from having your deployment split up like this. Like I said, fault tolerance improvements: handling a down cell and the API still being able to provide as much data as possible, as well as some issues around quotas, making sure that that calculation happens properly when cells are down. There's plenty of room here still to make incremental improvements for performance and fault tolerance. And one of the things that's nice about this new architecture is that the scheduler and placement have a global view of all of the hosts. So while that presents a little bit of a scaling problem, it provides a nice ability to do smarter placement and things like affinity guarantees in placement, with a high-level view of the cloud, which is cool. So thanks.

So, hello. What I'm going to do is talk about our experience running Cells V2 during the last six months. But first, a little bit of context about what CERN is and what we do. CERN is the European Organization for Nuclear Research. The main goal of CERN is to do fundamental research, especially in the particle physics field. If you are wondering what this picture is, this is the Large Hadron Collider, the LHC. It's the biggest particle accelerator in the world. It's a ring of 27 kilometers, all of it 100 meters underground, and it crosses the border between France and Switzerland. And this big blue pipe that you are seeing there is basically a huge magnet, a superconducting magnet. To operate, it needs to be cooled down to minus 271 degrees Celsius, so a lot of liquid helium to cool this down.
And inside there are two smaller pipes, which is where the beams of particles travel, accelerated to very close to the speed of light. These beams travel in opposite directions and then they collide inside huge detectors that are also 100 meters underground. When they collide, you can see a whole bunch of particles. What these detectors do is basically take pictures; they are big digital cameras, but they take 40 million pictures per second. This produces up to one petabyte of raw data every second. Of course, you cannot store all of that, so it is filtered and what we store is just a few gigabytes per second. Then all this data needs to be analyzed, and most of it is analyzed in our CERN cloud, which runs on top of OpenStack.

So this is one dashboard that gives you an overview of the size of our cloud. We have around 3,500 users, more than 4,000 projects, around 36,000 virtual machines running, and more than 9,000 nodes, hypervisors, plus more than 1,000 bare metal nodes for Ironic. In terms of cores, we have around 300,000 cores. If you saw previous presentations from CERN, the number of available cores has actually gone down. This is not because we removed capacity from the cloud; it was because of recent issues like L1TF, which required us to disable SMT on the larger part of our cloud. That's why you see this decrease in the number of cores, and it's reflected in the overcommit and the cores used. Just so you know.

So, cells at CERN. We have been running cells basically from the beginning of the CERN cloud, since 2013. So why do we need cells? Why are we using cells? There are several points. First, we wanted to have only one endpoint to offer to our users. We have two data centers, one in Geneva, Switzerland, and the other in Budapest, Hungary. At that point, we didn't want to have two different regions, or even two different endpoints, to offer to our users; we wanted this to be completely transparent, and cells actually allow us that. So we started with only two cells, one in the data center in Geneva, the other in Budapest.

Then availability and resilience. If you have cells, what they basically allow you to do is partition your infrastructure into small sets of nodes, and this allows you to scale your infrastructure. In our case, each cell has around 200 nodes, and we have a lot of them: at this point, 73 cells. And they can basically act as failure domains, because a cell is more or less isolated: you have the compute nodes, you have the control plane, and if that control plane fails, only a small part of your infrastructure is affected. Depending on what kind of workloads are there, it may or may not be a high priority for you to intervene.

One nice feature of cells, and in this case I'm talking about cells V1, this was 2013, is that we could dedicate workloads and projects to specific cells. We have different hardware types, and hardware is bought for different projects, so we could tell Nova to create all the instances from a given project in that particular cell. We also separate hardware types per cell, because it makes it easy to organize and deprecate hardware. And it's then very easy to introduce and evaluate new configurations in Nova, because cells are more or less isolated, meaning that if we want to introduce a new configuration option and test it at scale, we don't deploy it in the entire cloud, we do it per cell.
So we introduce the new configuration in a few cells, evaluate at scale how it performs, and then we can roll it out to more cells, eventually to the whole cloud.

So, a lot of advantages, but a lot of disadvantages as well. It was an operational nightmare, as I call it here. The main reason is that it was not really maintained upstream. Why? Not a lot of deployments were using cells; I only knew a few of them, only big clouds were using cells. Also, there was a lot of functionality missing: simple things like aggregate support, server groups, security groups if you were running Nova network. All of that was not available if you were using cells V1. So what these deployments that were using cells started to do was carry patches, basically, to have this functionality. And we started having different patches to solve the same problems. It was then very hard to move this code upstream, because of the lack of testing for cells at that time. So we ended up solving the same problem in different ways, each adding the specifics of our own deployment. As you can imagine, upgrading in this situation was very, very challenging, because you needed to make sure that all your patches would work with the next Nova version, which was not written with them in mind. Another architectural aspect of cells V1 was that the DBs were synced: there was a top cell DB and then all the Nova cell DBs, and they were kept in sync. And sometimes that failed, and the DBs got out of sync.

So, our journey to cells V2. We started in 2013 with only two cells and a few hundred nodes, on the Grizzly release. We upgraded through all the releases between Grizzly and Newton, and at Newton we started thinking about what we really needed in order to upgrade to cells V2. Pike was the first release that allowed a multi-cell deployment with cells V2. So at that time we did a lot of work to upgrade to Ocata, and then after a few weeks we upgraded to Queens with cells V2; we did the upgrade and the migration at the same time. This was April this year. We had 70 cells, more or less the nodes that we have today. It was very challenging, and if you want to know more about how we did this upgrade and this migration, I gave a talk at the last summit describing it. But what today is about is cells V2 and our experience with it.

So why are we so excited about cells V2? First of all, we are now using all the code from upstream. We've greatly reduced the number of patches that we carried for very simple functionality, like flavor handling, that was not available in cells V1. All Nova deployments now use cells... I think it started in Newton, every deployment needed to move to cells... so almost everyone now is using cells; we are not in the black hole anymore. That joke is not because we are at CERN. Finally we have the full feature set that we can use. We have the promise of consistent DBs, because there is no synchronization anymore between the top and cell DBs; that concept doesn't even exist anymore. And with cells V2 you can now do rolling upgrades, which in the past with cells V1 was not possible: basically we needed to shut down the entire cloud, do the upgrade, then pray a little bit, turn everything on and see if it was working. So we were so excited with cells V2 and all these advantages that, as you noticed, we moved very fast to cells V2.
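To make the "no more DB synchronization" point concrete, an aside: in cells V2 the API database only holds a mapping to each cell, which you register once with nova-manage. A minimal sketch of the kind of commands involved follows; the cell name, hosts and credentials are made up and this is illustrative, not our exact procedure.

    # Register a new cell in the API database
    nova-manage cell_v2 create_cell --name cell73 \
        --database_connection mysql+pymysql://nova:secret@cell73-db/nova_cell73 \
        --transport-url rabbit://nova:secret@cell73-rabbit:5672/

    # Map newly added compute hosts into their cell, and check the result
    nova-manage cell_v2 discover_hosts
    nova-manage cell_v2 list_cells

After that there is nothing to keep in sync: the API services simply follow the mapping to reach each cell's database and message queue directly.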
Of course we knew that, being one of the first, and I think one of the biggest, clouds moving to cells V2, we would probably find some issues doing this, but we were willing to take that risk and actually help the community and work on it. As you can guess, we identified a few issues at scale, the kind of thing you only notice if you are running thousands of nodes and a large number of cells, but thanks to the work of the whole Nova team it was debugged, and most of it is already fixed in Rocky and has also been backported to Queens. So what I'm going to do next is show some issues that we discovered in our cloud after the migration to cells V2. I have these dramatic titles just to grab your attention.

So, hot databases; you can see those flames. What is this about? In cells V1 we had the top cell database, and all the Nova API calls were basically querying that database, meaning that the Nova databases for the cells, each cell has one database, were not doing much: very, very little, basically only the synchronization. So in our deployment, and in all cells V1 deployments, we had a very performant top-level DB, basically to absorb all the requests, all the queries from the APIs. We were aware that the pattern from cells V1 changed: now with cells V2, when you do a nova list or a nova boot, the APIs need to go through all the Nova databases in the cells. But even though we were aware of this, we underestimated the impact at scale. What happened is that when we migrated to cells V2, we saw a huge increase of load on our cell DBs, and they were not coping well with it. Simple operations in cells V2 like nova list go to all the cell DBs, and if you have a lot of them it actually needs to do a lot of queries, and at the beginning, in Queens, these operations were sequential, so the time kept adding up and requests took a while to finish. All of this is actually already fixed in Rocky; I don't remember if it was backported to Queens. Of the graphs that I have there, this one is the most interesting. These are some metrics from one of our cell databases, and this one represents the number of connections, and this is when we migrated from cells V1 to cells V2. That one is before the upgrade, and you can see when we enabled the APIs for only a few users: basically, initially it was doing nothing, and then the number of connections increased enormously. We were not expecting this. We knew that it would increase, but we weren't aware it would be this much. So basically we needed to change our database configuration, how we set up our databases, to support this. So be aware if you are moving to cells V2. Most of this is actually fixed in Rocky, because now Nova, instead of going through all the cells, only selects the cells where your project has instances, so it's improved a lot.

Another dramatic title: DB down, cloud down. What is this about? Basically this is related to the previous issue. If you do a simple operation like nova list, Nova goes through the cell DBs to complete the request; a nova list needs to go to the DBs where your project has instances to give you a result, because in cells V2 we are not duplicating this information, and that's why Nova needs to go to these cell DBs. And of course, if one of these DBs is down, Nova cannot give you an answer, because it doesn't know how many instances, or which instances, are running in that cell.
So it will fail. The problem is that if you have a lot of cells, it's almost certain that some DB will be down, something will happen eventually. Given the cells V2 architecture, there is no perfect solution to handle this issue. The Nova team actually recommends that you have a fault-tolerant setup for your databases, which is reasonable; in our case it is not very feasible considering the number of MySQL instances that we have. We are currently running 73 cells, so having to manage a cluster, or even replication, for each database is a big operational overhead. So we are trying, with a few compromises, to get Nova not to fail when a DB is down. There is a spec on how to handle a down cell; everything is described there, please go through the spec. Basically the main idea is that if a DB goes down and you do a nova list, Nova should give you, if it can't get all the information, at least minimal or partial information about your instances, everything that it knows. And Nova does know some things that are stored at the top, in the Nova API database, like the UUID. For most people, at least for us, that is a good enough solution, or a good enough compromise.

Another thing is scheduling. We came from the cells V1 world, and cells V1 has not one scheduling level but two. First, in cells V1, if you are trying to place a new instance, the top scheduler will select the best cell for your request, based on filters that you defined or the mappings that you have, and then the local scheduler in the cell will select the best node based on the filters that you enabled. This concept, the two-level scheduler, doesn't exist anymore in Nova. Now everything is global, using placement, meaning that things we were used to, like having special configuration for the schedulers in particular cells, are not possible at this point in cells V2. For example, the PCI passthrough filter, which in the past was only enabled in one cell because it's the cell where we have GPUs, now needs to be enabled in all the schedulers globally, just because we have a few nodes with GPUs. And of course, this has some overhead in the scheduling time of all instances. We are discussing this with the Nova team to improve things like that.

Another example: we do this mapping between cells and projects, and initially placement and the Nova scheduler were not aware of this, so it was very hard to achieve with cells V2. But for Rocky the Nova team implemented this feature, the request filters, which allow you to do this mapping using host aggregates and placement aggregates, not only for projects but also for availability zones. This work was done in Rocky; we backported everything to Queens and we have started using it, by the way, and it actually works very well. Pretty good. What it also allows us to do is reduce the number of results coming back from placement. Previously, when trying to place an instance in our cloud, the scheduler would ask placement for allocation candidates and we would get a big bunch of nodes back; the default limit is 1,000, which is a lot for the scheduler to go through. With this filtering, placement now only returns the nodes for that particular cell, because of the mapping, or for the particular availability zone that the user requested.
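For reference, in Rocky this mapping is driven by scheduler request filters plus metadata on the host aggregates. A minimal sketch of the pieces involved might look like the following; the aggregate name and project ID are placeholders, and this is illustrative rather than our exact configuration.

    # nova.conf on the scheduler nodes
    [scheduler]
    # Ask placement to pre-filter hosts using aggregates mapped to tenants
    limit_tenants_to_placement_aggregate = true
    # Ask placement to pre-filter hosts by the requested availability zone
    query_placement_for_availability_zone = true

    # Tag a host aggregate with the project that should land on it;
    # Nova mirrors the aggregate into placement so the filter can use it
    openstack aggregate set --property filter_tenant_id=<project_id> cell42-aggregate

The design point is that the filtering happens in placement before allocation candidates are returned, rather than in the scheduler after the fact.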
However, in our case, if a user tries to create an instance in availability zone A, for example, placement can still send up to 800 compute nodes as allocation candidates for that VM, which is still a lot for the scheduler to cycle through. There is a configuration option for this, max_placement_results, which we set to a very low number; actually we have it at 10, and this improved the scheduler performance a lot, because then placement only gives 10 results. However, it brings some issues. For example, if you try to live migrate an instance to a specific target host, that request needs to go through the scheduler, which then asks placement, and if the node that you specified is not in the set of nodes that placement returns, in our case 10, the request will fail. This is already fixed in Stein, I think, and I think it's being backported to Rocky. It was the same for rebuild.

And other minor things: if you have been running a cloud for a long, long time and archiving instances, you will have these orphaned request specs and instance mappings, and there is nothing in Nova right now that removes them. In our case we have thousands and thousands of such entries in the database, so we need to find a good solution to get rid of them. Slow availability zones: an availability zone list goes through all the aggregates and all the services, and if you have huge aggregates this can take a lot of time. That is particularly noticeable in our Horizon, because the dropdown to select the availability zone becomes unavailable, so users complain. Scheduling time: we notice this because we ran cells V1, and in our case it's higher in cells V2 than it was in cells V1; we need to make an effort to look deeper into this, and we are trying to help. And don't always expect a consistent state from a database that is five years old, especially one that ran cells V1. We noticed this, for example, in a simple operation: deleting an aggregate failed because Nova was not able to find the service in the cell database. Yes, the data is not consistent, but Nova shouldn't block on these kinds of operations. As you can see, these were a few issues, but they are already fixed in Rocky, most of them backported to Queens, thanks to very good communication and collaboration. I think it was very good.

So we are continuing this trend of upgrading Nova very fast to the latest release. Two weeks ago, we upgraded Nova to Rocky. Even if this is not really related to cells V2, it is a little bit, because it's our first upgrade using cells V2, so I wanted to tell you about our experience with it. In the past, upgrading Nova with cells V1 required months of work and planning, making sure that all our internal patches would work with the new release. It was a heavy operation. The upgrade itself could take up to one entire day with the APIs down, because at that time we needed to upgrade all the nodes in the entire cloud: the RPC versioning in cells V1 was not something we really trusted, sometimes it worked for some operations but most of them would fail, so we needed to upgrade everything. It was a very heavy operation. This time, with cells V2, we did the upgrade to Rocky with only one hour of API downtime, and only that much because we were conservative; it was our first time upgrading with cells V2, so we wanted to do it slowly.
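For what it's worth, the conservative rolling-upgrade pattern described next relies on Nova's standard RPC version pinning. A minimal sketch of the relevant nova.conf setting, just the stock option and nothing CERN-specific, is:

    [upgrade_levels]
    # Let services negotiate the RPC version automatically, so upgraded
    # control-plane services keep talking to not-yet-upgraded computes
    compute = auto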
We upgraded the control plane and then we let the compute nodes upgrade themselves during the next 24 hours. So what is our control plane? It's VMs running on the infrastructure itself. Each VM has four virtual CPUs and a few gigs of RAM, nothing special. For the APIs, we have 16 VMs running only the Nova API, then 10 VMs running the Nova conductor plus the Nova scheduler, and another 10 VMs running the Nova placement API. Keep that number in mind, because it's important for the next slide: 10 placement API nodes for the entire infrastructure. And then we have 73 cell controllers. When we upgraded to Queens and migrated to cells V2, we had 70; during these six months we actually added three more cells, around 600 new nodes, into the infrastructure. So all of this was upgraded during that one hour of downtime. Of course, we did the DB schema sync the day before the upgrade, and we set upgrade_levels compute to auto. Everything was working fine, and we set the compute nodes to upgrade during the next 24 hours via our automation.

And this next part is not related to cells V2 at all, it's a scaling issue, but it's very interesting. The compute nodes started to upgrade to the Rocky release, and on the left what you can see is the number of placement requests before the upgrade, so before 8 a.m., and then after, when the compute nodes started to upgrade. That was the normal load for us in Queens, around 1.5 million requests or less. And then when the compute nodes started to upgrade to Rocky, you see the number of requests to placement increase a lot. In this one, what you see is the time that each request takes in placement: the compute nodes start to upgrade, and the request time of course goes up. We also have graphs of the CPU load of the nodes; it goes to 100%. These were those 10 nodes that I mentioned running the Nova placement API. And you can guess when we added more capacity to the placement infrastructure, because then the request time goes down. This was around 8 a.m. the next day, when we arrived at the office. And then the requests continued to grow, because the upgrade had not yet finished on all the compute nodes. So basically now, instead of 10, we are running 30 placement nodes to absorb all this load. We have been discussing this with the Nova team and the placement team to see if we can improve this; placement is now very chatty. Of course, during that period we had an impact on VM scheduling, because placement was very slow.

Another issue that we hit with the upgrade to Rocky, which is not cells V2 related, was with the nova-compute services that we run with the Ironic driver enabled, because all these new placement requests take a lot of time in the full update cycle through all the instances that such a nova-compute is managing. In our case we have one nova-compute for more than 1,000 bare metal instances, and this takes so long that Nova was reporting those services as down. After we looked into these issues, Nova is now working fine.

So I think the main message of this presentation, or at least of my part, is that the CERN cloud is running Nova Rocky with cells V2, which is pretty awesome. Of course, we found some issues in Queens; this was expected, and most of them are already fixed thanks to all the collaboration with the Nova team. Thanks a lot. Cells V2 works at scale: we have 73 cells and more than 9,000 nodes, and it works fine at that scale. Of course, there are improvements to be made in performance, and they are being worked on.
No more carrying code and cruft like in cells V1 just to have basic functionality; everything is there, and upgrades are much easier. So thanks to all the Nova team for this. Thank you.

So I think we have some questions. Sure. Hey, I can't get over to the mic so I'll just speak loudly. So is the lesson learned that placement will be overworked? Yes, it will be overworked. Basically that is the point I'm making here: the number of placement requests increases a lot, so be prepared for that. It depends on the size of your cloud; if you have a small cloud you will not notice this, right? Any more questions, please? You need to speak into the mic, otherwise I cannot hear, sorry. My question was: how do you address the networking aspects, port creation and deletion, when you use cells? Do you also split out the networking side? So, the cell concept is only a Nova concept; Neutron is not aware of cells at all. For us, Neutron is a global service. It would be great if you could split the load in Neutron per cell, but at this point it's not possible. Port creation and deletion is not cell-aware at all. Is there another? Okay. Thank you all. Thank you.