Okay. Thank you very much for coming. I know it's the end of the day, and it's great to have a full room. We'll try to get through this quickly so you can get to the beer outside and go to the parties and everything else. But hopefully this full attendance reflects the importance of and interest in this topic, which we're definitely seeing out in the field. So, just to introduce myself: I'm Neil Levine, I do product management at Red Hat for Ceph. I have Sean Cohen, who's on the product management side for OpenStack, and we have down there Gorka, one of the core developers on Cinder, who might join us for a brief demo halfway through.

So this talk is one we did at the last summit, and we'll probably do it again at the next summit, because it's definitely a work in progress. Those of you who went to the Cinder talk this morning will have heard some commentary on the state of volume replication and volume backup and so on in Cinder. So we're going to cover some of the options that we think are out there at the moment. Some of them work pretty well, some of them are not so perfect, and we're going to show you where we're going.

First of all, we'll explain a little bit about how OpenStack and Ceph work from a multi-site perspective. It does assume you have some basic knowledge, but we'll go through some of the components. Then we're going to go through four use cases, or topologies, for how we think you can lay this out, which have varying levels of complexity, trade-offs, and elegance. I'll then paint the picture of where we think this is going, certainly in the Liberty cycle, towards the Nirvana of perfect multi-site and disaster recovery. So with that, I'll hand over to Sean to kick things off.

All right. Thank you, Neil. So the first topic is, of course, looking at OpenStack disaster recovery. This is actually my third talk on the subject, and as Neil mentioned, this is an ongoing effort; we cannot finish it in one cycle, and it involves a lot of services. Block storage is just one piece: when your site goes down in flames, you need all the services to be available on the other site in order to restore, and there are building blocks that need to be in place.

If we look at the different topologies we have for OpenStack, it starts with what I call disaster recovery for the poor: I have a cluster, let's stretch it between two sites and see what happens. Then we have one OpenStack cluster, and then we move on to two OpenStack clusters. When we look at the DR configurations, this is where RPO and RTO (recovery point objective and recovery time objective) come into play, and I have various choices, all the way from an active site with a cold standby, to an active site with a hot standby, and of course the holy grail, active-active, which typically costs more. The same goes if you look at Amazon, for example, or any other commercial cloud offering: if you want better RPO and RTO, you will have to pay more. It has to do with the investment in hardware, but that's not all.

So if we look at RPO and RTO, the most basic thing you could do is just have tape backup and ship it by truck to another site. That's the baseline. Then, if I need more, I can do replication, which can be asynchronous or synchronous, and I move up and up, and of course continuous mirroring is at the top.
But even if I have mirroring, which is continuous mirroring from one site to another, then if I have a failure at the application layer, that failure will be copied to the other site as well, so you might have another disaster on the other site too. This is why you need snapshots to protect you on both sites, so you can actually roll back the last operation.

So that's pretty much where we are in general, but what is actually involved when we talk about disaster recovery with OpenStack? If you look at the ingredients, data is not enough. Let's say I have a method to replicate my data to the other side. Am I done? No, this is just the start. I need to capture the metadata of the relevant service workload resources in order to be able to restore it on the other side. I need to ensure that the VM images are present in the target site. And I'm not even opening up the case where we have different configurations in the two sites: if the VM configuration has different IP settings, I need different settings on the other side. This is where we need to leverage Heat, for example, to deploy different configurations in the target site. So replication of the workload data, whether using data replication as a start, application-level replication, or backup and restore, is the initial, fundamental step towards disaster recovery.

So let's start with the OpenStack components. As you know, Cinder has its own API for doing backups, and one notable milestone in Cinder backup's progress is the ability to do a backup together with its metadata state. So if I'm backing up my volume, I can back up the metadata as well, and if I import it into another OpenStack site, or even a brand new install, I will be able to take these volumes and restore them. In terms of HA, just like any other service in OpenStack, we have services working in HA pairs, but within a single site. Today, Cinder doesn't have the notion of an additional Cinder on the other end to talk to; this is my own domain, I'm just one Cinder. So we don't have an inherent multi-site or disaster recovery architecture built into the Cinder topology. When we look at the APIs, we have volume migration already built into the API; there's other work to be done there, but that's an important milestone. We have the backup API, as I mentioned, and we have the volume replication API that started to land in Icehouse; there's more work to be done there, as we'll see as we move on.

When we look at OpenStack Glance, the image repository, I need to take the images and make sure I can actually import them on the other side. It's a similar topology: we have a notion of HA pairs, but no notion of multi-site. And of course, we have the Glance API that we can utilize.

Moving to Nova. Here we need the metadata of the database and the volume, because if I need to restore a volume in the other site, I need to have the metadata kept or backed up. In terms of topology, it's the same thing: HA pairs, single site, no inherent multi-site. When we talk about cattle, should I back up my ephemeral volumes? We wouldn't recommend it. You can use snapshots to protect the ephemeral volume you use as your boot volume, but that's not something I think is in scope here. And of course, this is where snapshots in Glance come in handy.

With that, I'm going to hand it off to Neil, who's going to cover the Ceph components.

So Ceph is slightly different to OpenStack; it's got a very different architecture, which I won't cover here.
But the main way that Ceph is used within OpenStack, and the reason it's become so popular, is as a common storage backend for Glance, Cinder, and Nova. This is because it does copy-on-write and thin provisioning, so it's an incredibly versatile and good way of doing storage for all of the different services.

The main mechanism we have right now with RBD is what we call RBD exports. It's a fairly low-level tool where you can export an RBD volume as an image. It actually comes out as a file that gets written to standard out, and you can pipe it into standard in somewhere else. So it's pretty low level, but that makes it versatile: you can wrap it in scripts, you can use SSH pipes and so on to move images around. And critically, when you do these RBD exports and imports, they're incremental by default. The first time you run it, it'll take a full dump of the image; the second and third time, it'll be incremental only. So you don't have to worry about whether it's incremental or full; it'll just do it incrementally for you by default. So it's pretty versatile.

What we're currently working on, or I should say what the engineers are working on, is what we're calling RBD mirroring, which is the next stage of that: volume replication, designed to work with the volume replication API in Cinder. Right now we're still putting in the low-level foundations, and we're targeting that work for the beginning of next year. This will allow us to do effectively live streaming of an image from site A to site B. You don't have to say "go and back it up"; you just say "replicate it" once, and that's ongoing for any image within a pool. If you're interested in the architecture, please join the mailing list, where you can get involved and suggest how you want to use this feature. But that's currently in progress.

People also use the RADOS Gateway, or RGW, which is the Swift and S3 interface to your Ceph cluster. This is typically used by tenant applications rather than the underlying infrastructure itself. If you're using RBD everywhere, you may use RGW for your backups, as you'll see, but typically this is about your users' data. For multi-site capabilities there, we have what we call the V1 implementation, which came out about a year and a half ago. It's basically an active-passive pair, but explicitly between sites: an eventually consistent layer that sits on top of your Ceph clusters and mirrors the data from site A to site B. I think it's fair to say it's a V1 with all the implications of that: it works, it's a little bit brittle to set up, but once you're up and running, it does its job. What we're working on now is a V2 implementation of that, to get to a full-mesh, active cluster design for multi-site. So it's slightly closer to how Swift does it, but still with the strong consistency of Ceph at the low level in each individual site.

Okay, so, armed with knowledge of OpenStack, which has databases filled with metadata, services running in HA pairs, and some API capabilities (and sometimes not), and hopefully armed with a little bit of knowledge about Ceph, we're going to explore some of the different use cases. We've got four that we've lined up in order of complexity and features.
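To make the RBD export and import mechanism described above a little more concrete, here is a minimal sketch of how an incremental ship of one image from site A to site B might be scripted. The pool, image, and host names are placeholders, the snapshot-naming scheme is just one possible convention, and the destination image is assumed to already exist on the remote cluster (created beforehand with rbd create); the incremental work itself is done by rbd export-diff and rbd import-diff.

```python
import subprocess
from datetime import datetime, timezone

# Placeholders: adjust the pool, image, and remote host for your environment.
POOL = "volumes"
IMAGE = "volume-1234"
REMOTE = "backup-site-b"   # reachable over SSH, with the rbd CLI installed

def ship_incremental(last_snap=None):
    """Snapshot the image, then ship either a full or an incremental diff."""
    new_snap = datetime.now(timezone.utc).strftime("backup-%Y%m%d%H%M%S")
    subprocess.run(["rbd", "snap", "create", f"{POOL}/{IMAGE}@{new_snap}"],
                   check=True)

    export_cmd = ["rbd", "export-diff", f"{POOL}/{IMAGE}@{new_snap}", "-"]
    if last_snap:
        # Incremental run: only the changes since the previous backup snapshot.
        export_cmd[2:2] = ["--from-snap", last_snap]

    # Pipe the diff straight into the remote cluster over SSH.
    exporter = subprocess.Popen(export_cmd, stdout=subprocess.PIPE)
    importer = subprocess.Popen(
        ["ssh", REMOTE, "rbd", "import-diff", "-", f"{POOL}/{IMAGE}"],
        stdin=exporter.stdout,
    )
    exporter.stdout.close()
    rc_import = importer.wait()
    rc_export = exporter.wait()
    if rc_import or rc_export:
        raise RuntimeError("RBD export/import pipeline failed")
    return new_snap   # remember this so the next run can be incremental
```

Use case two later in the talk applies the same idea across a whole pool, with the backup snapshot name doubling as the marker for where the last incremental left off.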
So the first one, of course, is the most obvious thing: hey, I just want to take my clusters and stretch them. This is not recommended. Both OpenStack and Ceph were not really designed to run across WAN links with high latency, across geographies. They can work in campus environments, and we definitely have users doing that, but you've got to pay a lot of attention to the latency of your links, the reliability of your links and so on. So phone up consultants for advice, and everything else, if you're thinking of doing that. But as a first-order step, this is not really what you should be doing if you're thinking about proper DR between continents, or between coasts of a country, and so on. If you do it with Ceph in particular, you have to pay a lot of attention to where you're putting your monitors, and there are a lot of settings that you have to tune to make it aware that it's operating across high-latency links. So, not really recommended.

The first real use case is just using Cinder backup, which we call the Control-Z capability. This is a very user-centric thing: your user has deleted a volume by accident, operator error on the Horizon console or wherever it happens to be, and you want to give the user an option to get back from that. That's really what Cinder backup is for. So here you're going to take one OpenStack cluster and keep it in site A, but you are going to provision two separate Ceph clusters in different physical locations: one which is the origin and one which is going to hold the backups. And whilst this has been designed for end users, we have some ways of allowing the admin to take advantage of it as well.

So here's a very simple map of the topology. And yes, it's simple, which means it's simple to set up, since you're just relying on the API, but it's not that fully featured in terms of giving you, the admin, a good night's sleep. The issues you have with this, of course, are that your Cinder service in site A is the authoritative one: you've got to make sure that the data is looked after and that you're keeping all the metadata safe and backed up and so on. You can pick different back ends for the backup; it doesn't have to be Ceph, it doesn't even have to be RBD. You can use RGW, or OpenStack Swift, or any other object store, or even an S3 shim, to move your backups over. But as I said, if you do use RBD, it's incremental by default, which is a huge benefit if you're moving very large volumes across.

The limitations when you're using Cinder backup are that the volume can't be in use (it has to be unmounted), and it doesn't really handle backing up of snapshots. It's a fairly primitive API call, which really just works for a simple, unmounted Cinder volume you want to send over to site B. So again, as a user, you can do that through the API or using Horizon. But the goal is to let admins take advantage of this topology too, to get over the sleepless nights they may be having. And I will introduce Gorka to explain some of the work we've done here to improve this tool for admins.

Yeah, I think everyone can hear me. Okay. In this next cycle, we are going to address some of the missing features in Cinder backup. For example, we are going to decouple the Cinder volume service from the Cinder backup service.
So Cinder backup will be able to scale out. We'll also include snapshot backups, and we're looking into scheduling the backups as well. In the meantime, the way you can work around these missing features is by using, for example, cron jobs, the Cinder client, the Python API, and some scripting. This way, you can create automatic backups. For example, the administrator can back up all tenants' volumes and choose whether he wants to make these backups visible to the tenants or to keep them hidden. You can also export all the backup metadata so it can be automatically imported later on. You can also create backups of in-use volumes, which means that you have to create a temporary snapshot and a temporary volume to do the backup, and then delete them. And you can control how many backups you want to keep in the back end, with a sliding window, deleting the older ones.

And now we are going to see a small demonstration of what this kind of script looks like. Something important to notice is that when we run the list command, it is really three commands altogether: it will show the Cinder volumes available, it will show what Cinder backup sees as the backups, and it will show what the script sees as the backups. This is a very basic demo, so it's easier to follow: we have only one demo tenant. No, did you start already? Yeah, no, no, nope, nope. Where's my Control-C button? Yeah, exactly.

Okay, the demo tenant has only one volume, but it is in use, and it already has one automatic backup done. The admin also has one volume, which is available, and he also has one backup of that volume. So, here we run the list. We see that we have one volume in use. There were supposed to be some callouts showing where everything was, but okay. Now, this is the administrator view, you can see it there. He also has one available volume, but he has two backups, because he sees the tenant's backup, since he made it himself, even though it belongs to the tenant.

Okay. Now we are going to fire the script to create backups for all the tenants. We are going to keep only one backup stored in the back end, and we are exporting the metadata to a directory. And we can see what it does for the in-use volume: it creates a snapshot, then a volume, then does the actual backup, then deletes the temporary volume and the snapshot. And it has to remove the old backup, because we only want to keep one alive, so it will replace the old one. For the admin's own volume, it only has to do the backup.

Okay. Now we will see the list of the volumes and the backups. And eventually, okay, there we go. What we can see here is that the volume ID reported by Cinder for the backup is the temporary volume's ID, while the script is smart enough to know that that backup was created through a temporary volume, so it reflects the original volume ID. With the callouts, it was a lot easier to follow. Okay, the admin list, there it goes. I don't know why it keeps lagging; I'll just hit it and start talking, it will eventually catch up. What we will do next is detach the volume for the demo tenant and delete it, and as an administrator, also delete the admin's volume, and delete all the metadata from the database, so it simulates being in a brand new environment.
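As an editor's illustration of the metadata export and import step the demo leans on (not the actual demo script), here is a minimal sketch using python-cinderclient. The credentials, the records file path, and the assumption that the v2 client exposes backups.export_record and backups.import_record working on a backup_service plus backup_url pair are all assumptions to verify against your client version.

```python
import json
from cinderclient import client

# Placeholder admin credentials and auth endpoint; a real script would read
# these from the environment or an openrc file.
cinder = client.Client('2', 'admin', 'ADMIN_PASSWORD', 'admin',
                       'http://controller:5000/v2.0')

RECORDS = '/backups/backup-records.json'

def export_backup_records():
    """Save every backup record so a fresh Cinder can re-import it later."""
    records = [dict(cinder.backups.export_record(b.id))
               for b in cinder.backups.list()]
    with open(RECORDS, 'w') as f:
        json.dump(records, f)

def import_backup_records():
    """Re-create the backup records in a rebuilt (or brand new) Cinder."""
    with open(RECORDS) as f:
        for record in json.load(f):
            cinder.backups.import_record(record['backup_service'],
                                         record['backup_url'])

def restore_everything():
    """Restore each known backup into a new volume."""
    for backup in cinder.backups.list():
        cinder.restores.restore(backup_id=backup.id)
```

In the demo, the import step is what brings the volumes back into view for both the tenant and the administrator after the database has been wiped.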
So, when we do the restore, we also import all the metadata back into Cinder. Okay, there we deleted it, using MySQL directly. And here we do an import of the metadata as well as the volumes. And then they are all back there, all the volumes visible to the demo tenant and the administrator. And now Neil will continue with the second use case.

So, the goal here is that whilst the Cinder backup API is, again, very primitive, you can put a reasonable amount of scripting around it. As an admin, you can back everything up, you can make it available to your users if they didn't decide to back things up themselves, and you can restore things if you want to in a reasonably clean state. It's a fairly clunky way of doing things, but it's using Cinder backup, which has been around for a while. There's some debate around its robustness, but it is possible to give yourself some comfort that all your images are backed up.

This is obviously only specific to Cinder. So what we want to cover next is: what if you are using RBD to back both Cinder and Glance, and you're using it for Nova as well? You're not just backing up Cinder volumes, you're backing up everything. How do you do that? This is what we call the admin warehouse. Again, there's only one OpenStack cluster: you've got one active OpenStack site in site A. And again, we're going to run two Ceph clusters here, one in site A and one in site B, so it's still a dual-Ceph deployment. But this is not driven through Cinder backup; this is taking things at a slightly lower level, and really it's closest to the tape backup approach. It's pretty primitive, but it's effective: it gets the stuff backed up, and perhaps puts more of the burden on the restoration of the data.

So here, you're essentially just taking a MySQL dump of your Cinder and Glance databases, and you're using RBD export to ship the data out. Again, if you're using RBD export repeatedly, it's taking incremental backups, so it shouldn't take too long, depending on the size of your cluster. If it's absolutely huge, this probably isn't ideal. But if you've got a reasonably manageable number of volumes, and you're just synchronizing the MySQL dumps with the RBD exports, then you've got your data in site B, all sitting there. Again, it's kind of like a tape in a sense: it's all backed up, but it could be slow to restore. This is not about a speedy recovery; it's about having a fairly safe and easy one, which you're controlling, not using OpenStack services, but really the lower-level components. MySQL dump: if you don't know how to use that, then I think there are other sessions you might want to attend. But there are scripts (there's one linked here) which go through an entire pool within Ceph. So if you've got separate pools for Glance and for Cinder, for your images and volumes, you can run the script to repeatedly take a dump of all of those and push them out to a second site. It's pretty low level: it won't handle cleaning up lots of snapshots, and it won't do grandfathering of images and so on.
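To give a feel for what that lower-level, admin-warehouse flow might look like, here is a minimal sketch that dumps the Cinder and Glance databases and then walks each pool, shipping every image to the backup cluster. The database names, pool names, paths, and remote host are all placeholders, MySQL credentials are assumed to come from a client config file, and ship_image does a simple full export for clarity; the export-diff sketch shown earlier is how you would make it incremental.

```python
import subprocess

DATABASES = ["cinder", "glance"]     # OpenStack databases to dump
POOLS = ["volumes", "images"]        # Cinder and Glance RBD pools (placeholders)
REMOTE = "backup-site-b"             # backup cluster host, reachable over SSH

def dump_databases():
    """Take a plain mysqldump of each database alongside the RBD data."""
    for db in DATABASES:
        with open(f"/backups/{db}.sql", "wb") as out:
            subprocess.run(["mysqldump", "--single-transaction", db],
                           stdout=out, check=True)

def ship_image(pool, image):
    """Ship one image with a full export/import pipe; a real script would use
    the export-diff/import-diff flow sketched earlier to make this incremental."""
    exporter = subprocess.Popen(["rbd", "export", f"{pool}/{image}", "-"],
                                stdout=subprocess.PIPE)
    importer = subprocess.Popen(
        ["ssh", REMOTE, "rbd", "import", "-", f"{pool}/{image}"],
        stdin=exporter.stdout)
    exporter.stdout.close()
    if importer.wait() or exporter.wait():
        raise RuntimeError(f"export of {pool}/{image} failed")

def export_pool(pool):
    """Walk a pool and ship every image it contains to the backup cluster."""
    images = subprocess.run(["rbd", "ls", pool], check=True,
                            capture_output=True, text=True).stdout.split()
    for image in images:
        ship_image(pool, image)

if __name__ == "__main__":
    dump_databases()        # keep the metadata and the data roughly in step
    for pool in POOLS:
        export_pool(pool)
```

The only real contract here, as Neil notes, is that the database dump and the RBD exports are taken close together so the metadata and the data stay roughly consistent; anything smarter than that is left to the admin in this model.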
But to restore the data, you essentially import the MySQL dump, the .sql text, and you reverse the streams on the RBD export script: what you've got in site B, you now push back to site A. So, pretty primitive, but it gets the job done. And again, if you're trying to avoid a sleepless night, this is the most low-level, non-OpenStack way of taking a lot of the backups. So, to go through the final two, more forward-looking use cases, I'll hand back to Sean.

All right, thank you, Neil. So we saw that there are a lot of things that can already be done today, even scripted, to save you the hassle, but there's also a more front-door approach, and there's work to be done, and with that we're going to move to the failover side. Unlike the previous two topologies, these topologies are based on two OpenStack clusters and two Ceph clusters. The backup, unlike in the previous use case where it can be done by the tenant user, is an admin, or operator, responsibility. We're still talking, in this use case, about active-passive as before, but we are using low-level tools to handle the backups. So, instead of using just MySQL dumps and RBD exports, we're actually using MySQL replication, and we're still using RBD exports as well. And this is how it looks: we have two Cinders, two OpenStack clusters; we have MySQL replication for Cinder, we have MySQL replication for Glance, and we use RBD exports between the sites.

Now, the replication is not including the HA pairs; we're doing it between the two nodes. And unlike an active-active configuration, and I think Neil touched upon this a bit, the consistency between the data and the database is not guaranteed, because I, the admin, need to control when to take the export; I need to actually align the consistency windows. Again, you can automate some of it, as we saw earlier, you can schedule a cron job to do the work, but it's still admin work to make it happen. So that's already two OpenStack clusters and two Ceph clusters, and we're getting to a real failover phase, but there's more to it, and that's where we want to go.

So, our golden disaster recovery option would be live OpenStack disaster recovery to a disaster recovery site. How are we going to do it? Through the front door: instead of using tools and scripts, we can use OpenStack itself, with OpenStack having a notion of the fact that it's doing disaster recovery to another site, and the way to do it is by using the front-door APIs. So, Cinder, as you know, already has volume replication, starting in Icehouse, and we are working on improvements to Cinder replication as we speak. As Neil mentioned, we are also working on the RBD mirroring that is going to make use of Cinder's volume replication API. And lastly, we can use Glance replication. The caveat in Glance is that we need to use the same FSIDs on the backup clusters to avoid inconsistency. And if you look, I'm including a link here; by the way, the slides will be available for download, and I'm including the barcode at the end. If you don't know the Glance Replicator, it's a pretty cool tool: it allows you to push Glance image copies to the other side, and that can be leveraged.
It's not built in as an API, it's a separate tool, but it pretty much does the same job: it allows us to push the images to the other side, which is a better way than just doing the exports.

And this is what's coming up in Liberty. So, I mentioned that volume replication has been here since Icehouse. However, we are trying to take volume replication further, and one of the things is actually aligning it with consistency groups. There was progress in the Kilo release on consistency groups, but we still don't have the connection between volume replication and consistency groups, so we need to put the two of them together. And on top of that, we still have replication only within the same Cinder deployment; it's not yet between Cinders. So, volume replication V2 (there are actually design sessions this week on this topic, so if you're interested, please attend) will actually solve the problem of data replication between two separate deployments, and with consistency group replication, it will allow us to synchronize the different volume types as well, and to define what needs to be captured within the replication. Because, as you know, we are serving applications, and application workloads may be spread across several volumes, including database, logs, et cetera; we need to capture them in a consistent state. So, if you connect this to what I said earlier about the pain for the admin of doing the consistency checkpoints manually, here it will pretty much be built into the API. Of course, we'll still be missing scheduling, et cetera; that will come next, but as you know, in OpenStack we take an incremental approach, and I think that's the way we're going. And with that, I want to hand it back to Neil to summarize.

As you can see, multi-site backup is not a simple thing, and it's fair to say it wasn't designed into the core of OpenStack at the very beginning. If you put all this to one side, there's a philosophical point here, which is that if you want to do proper multi-site, really you take a cloud approach: you push this up to the application, and you say, look, it's up to the application to pick between multiple clouds, ensure that the data is stored in multiple locations, and handle the failover. That's very much the public cloud, Amazon-style way of doing things. But recognizing that in the private cloud you've got admins who have eyes on them and who need to ensure that there are backups, you have to do something right now, and that's what these use cases are designed for.

So just to review: you've really got a choice between making this a user-driven option, where you're using Cinder backup, which you can then optionally wrap with scripts like the one Gorka's developing to make it an admin-driven workflow, if you don't trust your users to be doing their own backups. If you don't want to focus just on Cinder, and you want to expand this out to cover all of the storage that's backed by Ceph RBD, your images and volumes in general, you've got a choice. Do you go for "just let me get it backed up; it's going to take me a while to restore, but at least I know it's there in some state"? Or use case number three, which is "I will run active-passive and accept some caveats, namely that there might be some inconsistencies I have to clean up manually, but at least I know I can fail over to site B in some form".
So right now you pretty much have to pick between the medium and what we call the advanced use case. But as you can see, the goal here, which we're working on both on the Ceph side and the OpenStack side, is to really get to a true active failover site configuration. The work is all being done there; it's just going to take time for it to all come together. But hopefully this has given you some ideas, and in the few minutes we've got left, if other people are running DR in different ways, I'd be interested to hear your options and how you're implementing them. Otherwise, thank you very much for your time.

Thank you. And with that, we're going to open it up for a short Q&A. As I mentioned, there's a barcode to the slides with the relevant links in there. And if you have a question, please use the mic.

In the case of the passive-active, I mean active-passive, topology, does the number of nodes in both sites have to be the same, or can the B site be, I don't know, one single node or something like that?

No, it can be different. Certainly on the B side you can choose different policies: you might want to have fewer replicas, so you have fewer nodes, because you don't need the same level of data integrity. Certainly you might want to compress several of the services and co-locate them, recognizing that site B is not going to have the same kind of latency and performance characteristics as site A. So yes, you can definitely save money by compressing things in site B; it depends what you're trying to offer your users. But as you're just copying the backup, the underlying topology that's supporting those backups can definitely be different.

I have a question. How is RBD mirroring different from having a crush map that puts two or three replicas in a different data center, on the backup side?

Yeah, so if you're doing things at the RADOS level, RADOS is a strongly consistent storage system, right? When you're doing a write, it needs to get a successful ack back from each copy that's been written. So if you're doing that over sites with high latency, that can take a long time, and applications are potentially going to be very sensitive to that high latency. So yes, if you do what we call split-site RADOS, you can in theory do it, but it can definitely have an impact on your performance, because the application is expecting and wants a very quick response back. This is slightly different with mirroring, where it's not at the RADOS level, it's at the RBD level. So you're copying the entire image, not just the RADOS objects underneath it, and you'll have a fully formed copy of the entire image in both sites. But, relating back to the earlier question, you may want to have them stored differently: you might have, say, a 3x replica in site A, but you may only want a 2x replica in site B. So right now, doing that stretched RADOS, you could do it in a campus environment, which will give you a similar kind of thing. But really, most people aren't doing backups within a campus; they're doing them geographically, across wide geographies, and at that point you don't really want to be spreading your crush map that far.

Okay, thank you. Next question for Sean. Hi, two parts. One is: some of the basic considerations of Cinder replication and all these things, some of them would again be applicable for Manila. And Manila is big in Liberty.
So are we thinking of some of these things, at least on the replication and backup of shares for Manila, now that we kind of have a handle on Cinder?

So, thank you for raising the question. It was almost on the tip of my tongue this morning, when we had the Manila session, to bring disaster recovery into that scope as well. But yes, overall it's in scope, and it's part of it: as soon as you put your eggs in the OpenStack basket, you start to have those concerns. As I said earlier, I believe the work will be incremental, because there are bigger gaps to address first, and there is a delta right now between Cinder and Manila. Manila still has a way to go in terms of implementing replication, which is not there yet, and consistency groups, which are not there yet. So there is parity still to be reached.

The other, smaller question is: now that we are talking about WAN and other issues, are we thinking of compression-enabled, extent-aware replication over WAN, all those things which we had in traditional arrays?

I mean, it's more about the storage side of things. Certainly that's not part of the initial design for RBD mirroring, unless Josh wants to tell me differently here. But yes, this is where having incremental backups in the current architecture gives you some benefit. It's something we'll have to look at, but it's not part of the original scope. You know, that compression is about getting the eventually-consistent window down as small as possible, but I think that's an optimization once we've got the basic architecture in place. And again, it depends on the back-end implementation, right? Some of the back ends do come with compression built in.

Will the demo you showed also be available on the site?

It should be, yes. Yeah, it is. There's a link to the full demo with the explanations, et cetera, that didn't come through here. Yeah, so you can go through it offline.

Any more questions? So, with data backup, we are pretty much clear on volumes and images. But in a situation where I need to restore, or if I want to have a DR site, what is the approach for backing up networking and launching virtual machines into the same networks, the same floating IPs, the same load balancer groups?

That was outside the scope of this talk, and it's a very good question, so you are invited to my next talks on this topic. But again, as I said at the beginning, disaster recovery is a larger umbrella than just block storage, because there's networking, there's compute, and, as I mentioned, Heat as part of the orchestration needs to take place. Today we tried to tackle just the storage side, but yes, as soon as you're dealing with disaster recovery, that's pretty much the next door to open. And again, Cinder, I think, is the only one right now with an API to maintain the backup together with the metadata, but there are other ways to do it as well for networking and compute, et cetera. So it's not fully active-active, as you know. But as I said, there's incremental stuff that can be done today, just like we showed there are things you can do already today.

Okay. When is the next session? He's had three sessions already today; he's kind of worn out, he's not thinking about tomorrow. Open the schedule, yeah. Yeah, yeah. So I think we have the road to enterprise storage session on Wednesday. Yeah, at four-forty. Thank you. Thank you. Thank you.