Hello, everyone. I am Michelle De Palma, and this is the Data Services Office Hour. I'm here with Chris Blum, and today's topic is OpenShift Data Foundation, up to date: we're going to talk about the features that you can get nowadays with the current version of OpenShift Data Foundation, or in short, ODF. Okay. Can you give me an overview of some of the features you want to talk about? What are we going to start with? Where are we going? So, for the features, we usually divide them into four categories: one, functionality; two, security; three, performance; and four, efficiency. We have features for all these categories that will help you manage your data needs in your OpenShift cluster. Okay. So, are we focusing strictly on the Rook-Ceph side of things? Are we including the MCG NooBaa side? Do you want to orient users as to what part we're talking about? I assume this is Rook-Ceph and features that are now available. So, ODF contains both. ODF contains what is known as Rook-Ceph in the upstream, and it also contains what is upstream known as NooBaa; in our product we call it the Multicloud Gateway, because it allows you to connect multiple backend cloud providers into one gateway where you can funnel all your data through. And the features we're going to talk about today will span both products, because ODF combines them and presents them to you as a single access point to your data. Fantastic. Okay. All right, so give me your first feature. What's the first thing you want to explore? The first thing that's quite interesting for a lot of people is our addition of compact mode. With ODF you're now able to deploy in a compact cluster, which means a three-node cluster that contains everything. On the OpenShift side, it contains the control planes and the workers on the same boxes. And if your cluster is beefy enough, you can put ODF on there and have the same abilities that you would enjoy in a larger cluster, also in this three-node cluster. And are there any restrictions in that compact cluster, or is pretty much everything available? Pretty much everything is available. We even got the resource requirements down a little bit, so it will consume a little less memory under normal circumstances. We were able to relax these requirements a little so that you don't spend that many resources just on infrastructure, and you have more resources available for the actual workloads that you want to put on these three-node clusters. Okay, and just for clarity, this is not Tech Preview or Dev Preview, it's GA, compact cluster? This is GA. Just checking. And it's great. We have a couple of people that want to put this in remote locations, and they like the idea of doing that. And we have a very interesting first project that is putting this on HP hardware. It's self-contained, like a mini system with everything in it, and you can carry it around. You could take it on an airplane. Wow, truly edge. Okay, fantastic. That's really cool. I like it. Maybe we'll get a demo of that in a future show. That would be nice. I mean, it's gonna look like ODF, you're really not gonna be able to tell, but we'll show you just how edge we can get. Fantastic, okay. It's very impressive. I got the chance to play around with it and do some performance tests on it, so we can talk about numbers in some future show.
But the real impressiveness of this comes when you see it in person, because it's a small box. It doesn't look too beefy, but nevertheless you can put a bunch of NVMe drives in there and really get a lot of speed. It has a built-in dual 10-gig switch, and you can even put some NVIDIA graphics cards in there for AI/ML. Fantastic. That's really cool. All right. And this is really great: you've got your compute right at the edge, you've got your storage right at the edge, and then you've got all the other features that we're probably gonna talk about that allow you to transfer data back and forth to maybe a main cluster, or several main clusters that are not edge. So it completes the picture, I would assume, right? Of what an enterprise customer might want to do. They may wanna put some things way out there at the edge, like in their car, and then be able to sync data back and forth. Okay, fantastic. Anything else you wanna say about that particular feature, about being compact? No, that's pretty much it. That's one of our big goals: we try to make it easy. We don't want it to be some complex thing that you need to learn, so we put a lot of effort in so that it feels and behaves just like a big install. You don't have to relearn things, you don't have to rethink things or wonder, oh, can I do this in this environment or that? It's just the same thing. So it sounds like you wouldn't even know that it's compact unless you actually checked and saw that your worker nodes are collapsed onto your control plane. Otherwise it feels the same, operates the same, et cetera. That's great. Yeah, that's a nice summary. You wouldn't really know. Awesome. Cool. Okay, you got another feature for me? Let's talk about the next one. What would you say if I told you that we finally fully tested VMware IPI? I know we have had a couple of people that used VMware IPI with OpenShift and also with ODF previously, but now we've fully gone through the whole QE cycle, so we can say this all works well. We fixed a few minor bugs, but it's fully tested now. We know it works nicely, and you can use your machine sets and your machine configs with that. I've been using it a lot and it works really well. That's great. Because I find that, especially coming from a more Kubernetes-standard background and then into OpenShift, I love machine sets and the machine set API. I was like, this is great, this is so easy. So this has been fully tested. Are you running this in one of our labs? Are you testing at home? Are you working with a customer that I'm sure you can't talk about? Where have you been testing, and what's the size of the cluster? Are we talking standard sizes, like three masters, seven masters, how large are the control planes? Did you get to push it to extremes, or did you just roll it into your day-to-day operations? Where have you been testing? I know I've probably been using it and didn't notice it. So your control planes are normal size, three only? So as you know, our team does have a bunch of hardware, and some of it does have a VMware cluster. We have automatic provisioning of those, and I usually lend them out to different people. You once had one of those clusters, and we sometimes give them out to engineering and QE, and they all do some work on them.
And the important thing here is, while I'm administrating it, I don't necessarily want everyone to have vCenter access. So I don't want to give out all the options there. But oftentimes, especially with such a diverse group, their usage patterns and requirements differ. Some need a small ODF cluster, some need a large ODF cluster, and they all need to define the number of nodes that they add to the cluster by themselves. And with IPI and the machine sets, we can actually offer that. It works nicely because I just hand them this small cluster, a bare-minimum default install, and I tell them: please create a machine set. You want to install ODF? Create a machine set and you get the right size. Nowadays I even have a script, so I give them a script that configures the machine set. And then if they need more nodes, they just scale up the machine set. That works very, very well and saves me a lot of work, because when they say they need more RAM or more CPU, it's just a config change within the OpenShift cluster. And it saves a lot of guesswork on their part as well. Previously, we had UPI, where during the installation I would create the VMs and then they would be connected to the OpenShift cluster. But that comes with a lot of inflexibility. People get one cluster size and then they're kind of stuck with it, and if they need more nodes or different nodes, I have to take action. So I'm really happy about IPI in this case, because it saves me a lot of work. That's great. Okay, so just to recap that feature: IPI saves the administrator a lot of work. It hides all of the vSphere controls from people, so you don't have to give them access. And it sounds like it allows you to actually use the machine set API properly, the way it's intended to be used, which is that people can grow the cluster and change config themselves. So awesome, that sounds awesome. I like it, good stuff. And the other thing is, from the OpenShift user's side, if something happens in the backend, let's say a hypervisor crashes and their VM is lost, then OpenShift can take action to re-provision a new node. They don't need to care about the backend, they don't need to care about its features, and they don't need to learn the VMware vocabulary. They just care about their OpenShift cluster, and it looks and behaves very similarly across all the different deployment platforms that we have. And that's the important thing in OpenShift. And ODF on top provides this homogeneous interface for your data. Awesome, wow, all right. Okay, do you wanna say anything more about that feature? Are you ready for the next feature? That's my deal. You have any more questions? I mean, it's interesting. I feel like that's gonna be a future show, watching it in action, just to see if there's something more we wanna show people there. I would really like to drive home the point of exactly how flexible this is, how it makes your administrator's life better and how we do it, but it depends on timing and stuff like that. So rather than telling people anything else about that particular feature, it might just be a really good thing to show: okay, this is how this is done, and this is the benefit it gives you, in action. Yeah, we could do that. I mean, when you watch this, just send us a message in the chat if you wanna see that, or if you have any other cool ideas.
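As a rough, illustrative sketch of the workflow Chris describes (hand users a MachineSet they can size and scale themselves), here is a minimal example. The cluster ID, names, and sizes are assumptions for illustration, and a real vSphere MachineSet also needs the VM template, network, and workspace details copied from an existing one; the Python below just renders a manifest you could apply with oc.

```python
# Minimal sketch: render a vSphere MachineSet manifest for dedicated ODF
# worker nodes. Requires PyYAML. Names and sizes are illustrative placeholders.
import yaml

CLUSTER_ID = "mycluster-abc12"          # assumption: your infrastructure ID
MS_NAME = f"{CLUSTER_ID}-odf-workers"   # assumption: machine set name

machineset = {
    "apiVersion": "machine.openshift.io/v1beta1",
    "kind": "MachineSet",
    "metadata": {
        "name": MS_NAME,
        "namespace": "openshift-machine-api",
        "labels": {"machine.openshift.io/cluster-api-cluster": CLUSTER_ID},
    },
    "spec": {
        "replicas": 3,  # ODF wants at least three storage nodes
        "selector": {"matchLabels": {
            "machine.openshift.io/cluster-api-machineset": MS_NAME,
        }},
        "template": {
            "metadata": {"labels": {
                "machine.openshift.io/cluster-api-cluster": CLUSTER_ID,
                "machine.openshift.io/cluster-api-machineset": MS_NAME,
            }},
            "spec": {
                # Node label so the ODF operator can schedule onto these nodes.
                "metadata": {"labels": {
                    "cluster.ocs.openshift.io/openshift-storage": "",
                }},
                "providerSpec": {"value": {
                    # Illustrative vSphere sizing; copy template, network, and
                    # workspace details from an existing MachineSet in your cluster.
                    "numCPUs": 16,
                    "memoryMiB": 65536,
                    "diskGiB": 120,
                }},
            },
        },
    },
}

print(yaml.safe_dump(machineset, sort_keys=False))
# Growing the cluster later is a single scale operation, for example:
#   oc scale machineset mycluster-abc12-odf-workers \
#      -n openshift-machine-api --replicas=4
```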
We do have a couple of ideas in the pipeline, and I definitely wanna do another quiz, because that was a lot of fun. Yeah, because they're fun. But one of the suggestions was about deep dives. We don't have to spend a whole hour on it, but if someone has a deep dive that they would like to see, we can do that, and take questions while we're doing it, that kind of thing. But I think this would be a good thing to show, because it's actually really important. And it may be one of those things that people don't really understand until they see it. They'll go, wow, I'm not doing this yet, I should be doing this, and how do I go about it, and what do I have to think about, that kind of stuff. So I'm in, I'll put it on the topic list. All right, cool. All right, what's next? So we're still in the functionality chapter, features that are available in the current release. And one of the advanced features, which is a little bit difficult to describe, is pool management. Why is it difficult to explain? Because pool management is a feature that comes from Ceph, and if you want to explain it, then you also have to explain a lot of Ceph stuff. But let's try to start with a use case. Let's say you have a large installation and you have different parts of your organization using the same OpenShift cluster. It happens. So you have different people. They trust each other, but they work on the same OpenShift cluster, and now they both have storage-intensive workloads. With a default installation, what would happen is that they would compete for resources and potentially drain each other of those resources, so they would slow each other down. What we can do at the Ceph layer is divide them, and we can say: okay, you get distinct hosts, storage nodes. So it's still the same overall storage cluster, but you can assign dedicated nodes to dedicated people or workloads. And then these can be used by workload A or workload B, and they don't compete for the storage resources anymore. So is each grouping a pool? Yes, you could say so. Okay, all right. So, questions: is this something you have to know about ahead of time, or can you do it after you realize that they're competing for resources and slowing each other down? Is it a design decision, or is it an operational decision that can happen later? You don't have to know it during the installation of ODF. You can divide the nodes afterwards into these distinct pools, but it would be extra effort to port the existing PVs, the persistent volumes, over to these pools. So you would have to migrate those persistent volumes onto your pool afterwards. It's not impossible, but it means migrating the data first. But yeah, it can mostly be done online. Okay, so then as a design decision up front, let's say you know you're gonna have at least a couple of teams that are gonna need pools, but then there's everybody else. Could you do something like that, where at install time you create a couple of pools for the teams you already know you have, and then you've got this kind of catch-all where everyone in the catch-all group competes with each other? Could you do something like that, just to prepare, if you wanted to have a few pools reserved, so that you're avoiding migrating later? You can do that. You can create the pools and say all the pools use all the nodes to begin with. And then later on you can change your algorithm and say: okay, that pool is now restricted to those nodes.
And it's also possible to say everyone uses all nodes, but this workload is only using these nodes, which are also shared with the other pools. Does that still make sense? Yeah, yeah. So it gives you a lot of flexibility, actually. So has anything changed about the default install with pools, or is it still everybody everywhere, with just the one pool? Like I said, it's a very advanced feature. We wanna support our advanced users with it, the ones that really wanna get the most out of their ODF installation. This is definitely not something you usually think about during the install, or probably even in your first year. But it shows how well Ceph is prepared for the overall lifetime of the OpenShift cluster. We have added so many features for OpenStack, which is also usually a large installation, and we are able to provide most of the features that we got to know from the OpenStack world in OpenShift now. So it's not like, oh yeah, that's a special use case, let us come back to you in two years when engineering has had time to develop it. It's usually there and somehow accessible, and this is one of those features. Okay, and this is GA? This is GA, yes. Okay, just checking for those who are listening: this is now GA, what we've just talked about. And the other cool thing is that you can construct nodes a little bit differently. As you know, ODF internal is all on flash, so we're talking SSDs and NVMes, but you can have different nodes. You can say: I have upgraded my hardware platform, and that new hardware platform can suddenly support NVMes. So if you had your whole cluster installed on SSDs and now you have those new nodes with NVMes, if you just push those NVMe nodes into the cluster, then the SSD nodes will obviously limit the performance of the cluster. It will increase a little bit, but if you can completely isolate that and define how you use your NVMes, you can leverage the power of the NVMes in this heterogeneous cluster a lot better. For example, you could say: always read from the NVMes. Wow, that's great. Wow, okay. So you can have your kind of high-performing nodes. I know that with Ceph we're always talking about kind of a holistic view across the hardware, but here you have the opportunity to specialize a little bit if you want to, right? So you could say this pool over here has all of the high-performing stuff, and then these things are gonna use it, and so on. You have the option to do that if you want to. That's fantastic. Wow, okay. I did not know that. Actually, this is new to me too. And pools are considered an advanced feature? Is it in the console? Is it advanced as in you're gonna go behind the scenes, or is it actually integrated in the console yet? I just can't remember if I've seen something about pools. Is it one of the options? No, it's not. As far as I know, it's not in the UI yet. Like I said, it's an advanced feature; we don't wanna bother regular users with it, because it's something you usually don't need. But in the future, when you're really big, you need to think about heterogeneous hardware, and that's usually where this is interesting.
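For reference, in the upstream Rook-Ceph CRDs this maps to a CephBlockPool that can be pinned to a device class such as nvme or ssd. A minimal sketch, with an assumed pool name; on a real ODF install, follow the advanced pool documentation rather than this:

```python
# Illustrative Rook-Ceph block pool pinned to one device class. The pool
# name and device class are assumptions; the field names follow the
# upstream CephBlockPool CRD.
import yaml

fast_pool = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephBlockPool",
    "metadata": {"name": "fast-nvme-pool", "namespace": "openshift-storage"},
    "spec": {
        "failureDomain": "host",      # spread replicas across hosts
        "deviceClass": "nvme",        # only place this pool's data on NVMe OSDs
        "replicated": {"size": 3},    # the regular replica-3 default
    },
}

print(yaml.safe_dump(fast_pool, sort_keys=False))
```

A StorageClass pointing at a pool like this is then what lets a particular team or workload land only on the NVMe-backed nodes.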
You've run your cluster for a while and now you're adding new generations of hardware while the old one is slowly running out. But it's not like a switch flips overnight; you slowly get the new hardware everywhere. It's usually data center room after data center room: you plug in new hardware and then you wanna gradually switch over. And while you can do that even without the feature, with the feature you can still leverage your old nodes perfectly well and gradually move over. Okay. So I wanted to mention the UI just because if someone goes in and looks for it and wonders, well, where is it? It's not there because it's advanced. Have a look at the documentation. It is there; it's just not presented to you in the UI because it's considered advanced. Awesome. Okay. Fantastic. Anything else you wanna point out about pools? Is that a potential show topic? Pools? I don't think so. Okay, let's move on. So are we still in functionality? We're still in functionality, we're not done yet. I mean, this feels like one of those commercial shows where we're like, oh, look at that, how great is that? Sorry. Please call now, 1-800-NO. So it's about time that we talk a little bit about the future. The future that we put into ODF is often marked as tech preview or developer preview. Tech preview means that we have limited testing from QE. It's a new feature, it's not yet ready for production, but it's generally working. We know it's generally working, we're just not a hundred percent sure. That's how we introduce new features. Developer preview, on the other hand, means there might be one or two people in the company that have tried it and said it works. But it's a very interesting feature and customers have asked us about it, so we wanna provide it, but there's usually no documentation at all. And for tech and developer preview, there's not gonna be any commercial support. So it's more about naming those features that might or might not be supported eventually, and then we try to collect feedback: you try it out, you give us feedback. Right, this is use at your own risk. Right, and it's like open source. Yeah, exactly, use at your own risk. Okay. And we want to get feedback so that we can make a really great product out of it. So we provide those features to solve a real problem, and if you have that problem, you try it out and give us feedback. Please don't do it in your production cluster, though. So we do have a couple of great features in the functionality area, and the one that a lot of people asked about is replica two. Replica two is a developer preview feature, so you won't find any documentation about it right now, but it allows you to reduce the storage usage of your data in ODF. And the standard is three, correct? Standard is three, yes. So we're going down to two. Okay. So, sorry, I was just gonna ask: when would one want to use this? Is it about just conserving resources, like you're in a tight situation and you just can't do replica three because you don't have enough to do it? Or are there other times when you specifically want replica two? What are our use cases for such a thing? So you would use replica two when you are very sure that either you're fine with losing the data or you're fine with having temporary read-only access to your data. Because when one of your replicas becomes unavailable, due to the node being down or not connected to the cluster, the cluster would go into read-only mode.
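Replica two is configured per pool rather than cluster-wide, as the exchange below confirms, so the same kind of CRD sketch covers it; the only change is the replica count. This is dev preview and unsupported, so treat it purely as an illustration:

```python
# Dev-preview illustration only: a pool that keeps two copies instead of
# three. Losing one replica can make the pool read-only until it recovers,
# which is exactly the trade-off discussed above.
import yaml

replica2_pool = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephBlockPool",
    "metadata": {"name": "replica2-pool", "namespace": "openshift-storage"},
    "spec": {
        "failureDomain": "host",
        "replicated": {"size": 2},   # two copies instead of the default three
    },
}

print(yaml.safe_dump(replica2_pool, sort_keys=False))
```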
And the other case is when you're already sitting on a storage system that provides you replication. For example, in my VMware environment, underneath the OpenShift clusters is vSAN, just because VMware doesn't provide any other way for me to easily have storage that is available across the hypervisors. So I'm using vSAN for that, and vSAN already has replication there. Right. And so I could say: okay, I don't need the replication on top, I already have replication down there. Perfect. Okay. That makes sense. All right. Keep talking. I had a question, but it'll come back to me. Go ahead. So that's what replica two is for. As I said, it's developer preview. That means we're very early in that and we wanna get some feedback. Obviously there is this downside that your data might become read-only when nodes are down, but generally it works well, and we don't think you're actually gonna lose data. We just wanna get some more input on that. Okay. So just for clarity: is this an ODF, OpenShift Data Foundation, cluster-wide decision? Or can you do this per pool, for instance? Could you have one pool where you say, just use these two? How complicated can you get? Okay. So all right. That's interesting. That was my question, I just wanted to understand. I'm trying to give people a sense of: all right, these are the things you have to think about before you install, and these are the things that you don't have to think about, that you can do later on. So if you wanted to create a pool later, and you had a selection of, say, only two nodes at that time, you could make that pool replica two. It's flexible; it's not an entire cluster-wide decision. Okay. That's all. Sometimes it's good to be clear. And this is dev preview, not tech preview. Dev preview. Okay. You won't find documentation there, but if you're interested, talk to us and we can provide the necessary documents. Okay. Great. Awesome. Wow. A lot of functionality features here. Go ahead, you were gonna say there are two more. There are two more areas that are quite exciting for me. And the first one is: a lot of people have come to us and said, well, we wanna divide the storage traffic, the network traffic, and the application traffic, because we have seen that if you're NVMe-based and you want to push your storage to the limit, you can with NVMes, and then the network becomes the bottleneck, because the NVMes are sometimes faster than your network. You usually see that with 10-gig networking, sometimes even with 25-gig networking; it depends a little bit on the nodes, but it's possible to saturate such a link. Now, previously you were always limited in OpenShift in that everything was using the same NICs: the storage was using the OpenShift overlay network and was obviously also using the same NICs as your application. So when you were really stressing your storage, you were also stressing your network, and then sometimes people couldn't properly reach their applications anymore, or even the etcd of the control plane got a re-election, because it thought the control plane wasn't accessible anymore in a three-node cluster. So you need to be very careful there, and what we have provided now is Multus support.
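Chris unpacks the Multus side next. As a rough illustration of the kind of CR involved, a NetworkAttachmentDefinition can describe a storage-only network on a dedicated NIC; the interface name, subnet, and attachment name below are assumptions, not values from the show:

```python
# Illustrative Multus NetworkAttachmentDefinition for a dedicated storage
# network: macvlan on a second NIC, with whereabouts handing out addresses.
# NIC name, subnet, and attachment name are assumptions.
import json
import yaml

cni_config = {
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eno2",                 # assumption: eno2 carries storage traffic
    "mode": "bridge",
    "ipam": {"type": "whereabouts", "range": "192.168.20.0/24"},
}

net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "ocs-storage-net", "namespace": "openshift-storage"},
    # Multus expects the CNI config embedded as a JSON string.
    "spec": {"config": json.dumps(cni_config)},
}

print(yaml.safe_dump(net_attach_def, sort_keys=False))
```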
Multus is an advanced network overlay, and what you get are CRs that define networks, and you can have private networks that actually use dedicated NICs. That way, and this is what's now tech preview, you can tell ODF during the installation, it's even in the UI, that it should use a dedicated private Multus network, and then the storage traffic will go over, for example, these NICs, or a bond or a team or whatever you wanna use, and it won't go over the other NICs that your applications use. So this is a feature that actually covers two things. We're talking about performance here, right, segregating our storage traffic onto this particular NIC using Multus, but it also covers the security aspect as well. Normally that's what I'm more familiar with, using Multus in that way: okay, this traffic cannot go over what's being used by your users, let's peel it off and put it on a different subnet or something like that. So that was dev preview and now it's tech preview; it's being promoted through the ranks, is that where we are? Yeah, so we have proper documentation on it, and as I said, it's even in the UI. We found an edge case where, if nodes go down, sometimes there's a little bug that prevents a pod from properly coming back up again, which we couldn't fix in time for this release. So it's still in tech preview. It's not like it would lose data; it's more about losing accessibility, the ability to mount the PVs on the pod. We have a workaround, but that's why it's not GA: we know of that issue, and we'll fix it and then we'll GA it once it's super safe and you can all use it. Okay, so question: any upfront design considerations when you know you're going to use Multus, anything that you would have to think about ahead of time when you're installing? Other than, I mean, I know it's part of the install, but is there something? What makes it super easy is if you have homogeneous hardware, so if you can always say eno1 is my storage NIC, or eno2, then it's super easy. If you don't have that, what you can always do is use a trick and rename your NICs. That's super advanced, but you can do it via the machine configs and then apply that to your different machine types. But if you are on VMware or in a public cloud, that should never really be an issue. Okay, all right, fantastic. Wow, awesome. Okay, more in functionality? More in functionality. Oh my goodness, this is great, actually. Okay, so it's been over half an hour and we haven't left the first category. But the others don't have so many features. Okay, go ahead. So the last topic here, which is also quite interesting, is DR. DR is something we have talked about a lot this year, and we are quite proud that we are very close to actually having it finished. And it's not just about the features; I don't like to just talk about features, I wanna talk about problems and use cases, and the problems that people face are different. I talk to some customers and they tell me: oh, we have two data centers, we wanna sync them. And in these conversations we usually notice exactly what their problem is. Some of them do need synchronous replication, so that what is written in location A needs to be present in location B immediately.
And for that, you will always need three locations, because you need to have a quorum. We are able to do that, and we call it Metro DR, because for synchronous storage the clusters need to be very close together, in the so-called metro area. What we can do is: you have location A and location B, which are your data centers, and then you can have a third location, which is your quorum node. No data lives there, it just observes whether both locations are visible or one is down, and then it takes action based on that. And for that, you just need a very small installation. For some metro areas, you could even place the third location in a public cloud, if you can provide the network connectivity. Metro DR is now tech preview. The other thing, what we call Regional DR, means we have asynchronous connectivity, and there it's enough if you have two data centers. We've had conversations, mostly in APAC, where people say: well, we don't have a third site, but we need to have this data replicated to the other side so we can fail over. We don't need to get all the data over immediately; we just need to have it there, let's say every five to 15 minutes, and that's enough. If we lose five minutes of data, it's not the end of the world. We just need to make sure we have all the other data and we can come back online as quickly as possible, and care about the rest of the data eventually. But we need to come back online as quickly as possible, and we don't have a third site. So that's where we have Regional DR. Regional DR is something we're working on; it's currently dev preview, so there's no documentation for it. We do have internal documentation, so if you wanna try it out, reach out. And we're always happy if you provide any feedback on both of the DR solutions. Eventually, we plan to make it super easy to deploy this using ACM, our Advanced Cluster Management, where you have a strategic overview of all your OpenShift clusters. You can even deploy OpenShift clusters using ACM, and then you will have the capability of seeing multiple OpenShift clusters, and you can just easily select them and say: hey, please set up the DR. Okay, so questions, just to clarify in people's minds. Metro DR is tech preview, right? So in terms of, if you're a customer thinking about this: when you need synchronous replication between sites, as near to instantaneous as it can be, disaster recovery stuff, then you're talking Metro. And the sites have to be near each other so that you don't have such a lag, because you cannot tolerate the lag. And in that case, you have to have a third arbiter: not just your primary and your failover, but another site that decides what's going on and gives you a quorum. In the case of Regional, you can tolerate a lag, and therefore you just need the two of them: you've clearly got a primary and you've got a failover, and so on. And that's dev preview. And in the future, ACM will be able to give you some sort of visibility into all of this. You can say: here are my Metro DR setups, here's my regional one, because I can tolerate a much greater lag on that in terms of data synchronization, and so on. Is that how someone should think about it? Like, okay, this has got to be synchronous, I definitely have to go Metro DR, as opposed to: this stuff can be five minutes later, it's going to be Regional.
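To make the five-to-fifteen-minute replication window concrete, here is a deliberately hypothetical sketch of what a Regional DR policy might express. Regional DR is dev preview and undocumented, so the API group, kind, and field names below are illustrative placeholders, not the shipping API:

```python
# Purely hypothetical sketch: two managed clusters plus a replication
# interval. The API group, kind, and field names are placeholders, not the
# actual ODF/ACM DR API (Regional DR is dev preview and undocumented).
import yaml

dr_policy = {
    "apiVersion": "example.dr.openshift.io/v1alpha1",  # hypothetical group
    "kind": "DRPolicy",
    "metadata": {"name": "regional-east-west"},
    "spec": {
        "drClusters": ["ocp-us-east", "ocp-us-west"],   # the two sites
        "schedulingInterval": "5m",   # roughly the worst-case data-loss window
    },
}

print(yaml.safe_dump(dr_policy, sort_keys=False))
```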
And I realize that with Metro DR there are actually physical limitations, right? Because you want it to be synchronous. But you could still do Regional DR; it depends on what your requirement is, right? Whether you need it synchronous or not, and how fast it has to be there. Well, Regional DR is never synchronous. Right, of course. We need a new term for that. The good thing is, the higher the bandwidth and the lower the latency between the two clusters in Regional DR, the smaller you can make the differential window. So Regional DR works across all latencies; it will eventually synchronize the data over. You can synchronize something from India to the US, or from the US to New Zealand, or something like that, huge distances. But if the sites are much closer, let's say US East Coast and West Coast, then the time it takes to sync those two locations will be smaller, so the potential amount of data that you lose in an incident is smaller. So just to give people something to think about when thinking about this feature: we are strictly talking about the ODF layer, the OpenShift Data Foundation layer. When we synchronize to the other side, whether it's asynchronous or not, we're copying over metadata too. There's all of the... What is the upstream operator name? Is it Velero? Am I getting this confused? It's OADP. Thank you. Yes, OADP. So I'm just saying, if you're a customer and you're looking into this, and let's say you wanna do Regional DR, at this point I'm just trying to think of the design considerations up front. So without ACM at this time, if you just needed to make sure that the data that you have in ODF gets to another site, the dev preview of Regional DR at this time would do that for you. Are we there? And then you would have to worry about bringing the rest of the cluster over and failing over. Like, how big does this grow? I know when I've looked at it, we're just talking about: okay, we're gonna get these PVs over and this is gonna synchronize, and that's nice. But how does it fit into the larger picture? I guess that's ACM's job. Is that the piece that ACM provides later on? All right, I'm just trying to fit it in: we're doing the ODF side, and then as it grows into ACM, you're gonna see more of, fail this entire cluster over, the data's already been synchronized, this is how this works, that kind of thing. Okay. All right, just trying to figure it out. Go ahead. I'm repeating myself. Our goal with ODF is that we wanna keep it simple. We don't wanna push you into training and make you read all the documentation; we wanna keep it simple. And what we found out is that DR in a lot of storage products is hard. You have to think about a lot of things, and you shouldn't need to, because all these things you can usually test for, and sometimes you don't even know the answer. Like, do you know the latency between all your data centers? You probably don't, and it's not that important. You just know that you have two data centers, and let the computer figure out all the rest. But to achieve that goal of making DR easy, you unfortunately have to take a step back. You have to look at an overview of all your clusters, and this is where ACM is placed, right? You have the strategic overview of multiple OpenShift clusters, and from this overview you look down and you say: okay, let's take this cluster, and it's connected. And we want to provide exactly that user experience.
Yeah, you take this cluster and that cluster and you say: now, link them. And we're not there yet, but we are going there, and that's why we're providing this with Metro DR as tech preview and Regional DR as dev preview. Awesome. That's fantastic. Okay. Is there more in functionality? It's been 45 minutes. Do we close out functionality? That's already it, Michelle. That's already it. Okay. So do you want to talk about the next groups? You said there wasn't quite as much in some of the other categories, or do they deserve their own show? What do you think? There is a little bit more, and let's go through it a little quicker, because with functionality it's just very technical and we want to also explain why we do the technical stuff. But for the next category, security, that's quite easy: people need security. And in fact, a lot of people choose Red Hat because of its security. We were able to show that Red Hat has really great security, and with our FIPS certification, for example, we provide a good foundation. What we added in 4.7 was that you were able to not only encrypt the whole cluster, but also encrypt each single RBD, each single block device. And you were able to use your already existing Vault instance or key management system with that, and it would provide a key, and with that key the block device would be... Hang on. Okay, I'm going to ask Chris to rejoin just for a second, hang on a second. We're coming back. And there he is. And voila. Hello. Hello. Okay. And it sounds a little bit like I've talked too much, but... Sorry, guys. Anyways. Okay, you were talking about security. Okay. Earlier, we were able to encrypt RBDs one by one, and now we are able to encrypt each RBD and its snapshots and clones. And the encryption key comes from your key management system, like HashiCorp Vault, for example. The other thing is, in the category of performance, we've added data segregation as a developer preview. That means you can pin certain disks on certain nodes and segregate your workloads onto different disks on the nodes, specifically choosing only HDDs or only SSDs for them. Okay. I'm not asking questions, in the interest of time. I don't have any, I'm just listening. And in the fourth category, efficiency, we actually have three more things. We have MCG caching, and I think Michelle already did a call about that. As a developer preview, we now have a bare-minimum deployment mode, so that you can reduce the deployment size to only what you need and save on CPU and memory. And as a GA thing, you now get pod-specific IO metrics: in your monitoring, you can drill down to how much storage IO each single pod consumes, if you're trying to catch who is draining the performance out of your cluster. Chris, we need to wrap up, no other way to put it. Okay, so we did functionality and we've done security. Can we pause and do the two other categories on our next office hour as a little intro, maybe right at the beginning, before we launch into a deeper dive on something? Okay. You're in the house, you make the rules. Oh, so much power. I love it. Okay, all right. Anyway, I wanna thank everyone for joining and watching. Please put your comments in the chat. I think next time we're talking about doing a poll as well, is that correct? A quick poll? I think so. I'm in for the polling. That's always really fun. Okay, thank you all.
And I will definitely update the calendar, so look at our shared calendar and see what's coming in the next Data Services Office Hour. Thank you. Take care. Bye-bye. Bye, Chris.