Hello, everyone. Welcome to the OpenShift Data Foundation Office Hour. I'm really pleased to have a special guest today, and hopefully he'll become a regular: Daniel Parks, a new member of the OpenShift Data Foundation team. Is that correct? Yes, that's correct. Hi, Michelle, thanks for inviting me. I've been at Red Hat for almost five years now. I did a lot of work in professional services, and just recently, as you mentioned, I changed roles to the hybrid platforms BU, doing storage-related work. Fantastic. Okay, so talk to me about what we're going to discuss today. Well, the release of the new version of OpenShift Data Foundation, 4.10, is getting really close, so if you think it's a good idea, we could walk through some of the cool new features coming in that release without going too deep into any of them, then jump into Multicloud Gateway — just a brief introduction, since it has already been covered in detail on other shows — and then look at one really nice feature that's new for Multicloud Gateway in ODF 4.10. Very nice. Okay, so what's new in ODF? In the next few days the release should be generally available, and once the product is GA I'd recommend that anyone interested go to the official Red Hat documentation page and read the release notes: all the new features are described there in detail, and there's a lot of cool stuff. I've just noted down the ones that seemed most interesting to me, so we can briefly go through them. My favorite is about data resiliency: regional DR, regional disaster recovery. I know Annette has been on the show before and you've already spoken about regional DR, but this is asynchronous replication between two sites. We have two OpenShift and ODF clusters deployed in different sites, and we're able to replicate the data — the persistent volumes — between both clusters. It is asynchronous replication, which we have to take into account, and I also wanted to mention that you need ACM, Advanced Cluster Management, for regional DR. Because we're doing application disaster recovery, we need failover and failback, and we need something outside the application to manage those operations and to configure the synchronization between the persistent volumes we're working with; that's why ACM is needed for regional DR. One more quick mention: at the moment we can do disaster recovery synchronization for block volumes; I hope in the near future we'll also be able to do it for shared file systems with CephFS. And that uses OADP, so it's ACM plus the OADP operator doing asynchronous regional replication — and regional means the sites can be fairly far apart, correct? Yes, that's the cool thing: because it's asynchronous replication we can do it over a WAN, so latency is not an issue.
Latency is something we have to worry about with the other solution that will come in the future, metro DR, where we're doing synchronous replication between the volumes; there, latency is something we have to take into account. But the great thing about regional DR is that it's asynchronous replication, so our sites can be far apart and it will still replicate and work without any issues. Okay, so in the new version, asynchronous regional DR is fully supported by Red Hat, it's part of the product, it's no longer tech preview — we've graduated from tech preview, awesome. Well, the thing is that in the first release, the day ODF 4.10 comes out, it's still going to be tech preview, but in a maintenance release very soon it's going to go GA, so it's just a question of time until it's available in 4.10 for everyone to use. Nice. All right, maybe we'll redo a show on it then — it would be worth the effort, because it's a big topic and a lot of people need it, so it might be worth going through one more time and seeing what it looks like. Awesome. Okay, so what else is new? This one is interesting for people who have already deployed, or are going to deploy, OpenShift and ODF on AWS: we have increased the backing storage options we support. When you deploy ODF with dynamic provisioning, you have to pick a storage class that you have on OCP for the backing storage, and we have now added the gp3 and gp2 CSI storage classes to the list. These are EBS volumes in AWS, and the really cool thing about gp3 is that in almost all cases it's cheaper to consume, so it's going to reduce your volume bill in AWS, and at the same time it gives you a fixed baseline of bandwidth and IOPS irrespective of the size of the volume. If you remember, with gp2 EBS volumes you get performance that depends on the size of the volume you provision — the bigger the volume, the better the performance. Yeah, okay, I had it backwards. All right, quick question: you make that decision about the backing store when you install, right? You can't change it later, or can you? If I'm a customer and I have ODF 4.9 on AWS now and I wanted to change my backing store, is that a fresh install, or is there an upgrade path? I assume there isn't really an upgrade, correct? That's something we still have to test — I haven't tried it — but AWS is offering a move from gp2 to gp3 transparently, without downtime, on the AWS side; I'm not sure exactly how it works, I think it may be something they do in the background. From our side, we'd have to test whether, on a running cluster that is already using gp2, as you mentioned, you can do that transparent migration to gp3 on the AWS side. That would be great, but it's something we need to test and see what AWS is doing; initially it seems possible, at least from what AWS says.
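As a reference for the gp3 point above, a storage class for the AWS EBS CSI driver with gp3 volumes looks roughly like the following sketch, which could then be selected as the backing storage class when deploying ODF; the class name and the extra parameters are illustrative assumptions, not something shown on the call.

    # Hypothetical gp3 StorageClass backed by the AWS EBS CSI driver;
    # select it as the backing storage class when creating the ODF storage system.
    oc apply -f - <<'EOF'
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: gp3-csi
    provisioner: ebs.csi.aws.com
    parameters:
      type: gp3            # gp3 gives a fixed IOPS/throughput baseline regardless of size
      encrypted: "true"
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete
    EOF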
Fantastic — it would be worth trying out; maybe that'll be an interesting show. Notice I'm coming up with ideas for new shows. Okay, is there more you were happy to see in ODF? What else is new? There are a couple of things I wanted to show in the UI, so let me share my screen for a moment — I'll share this window. Okay. I'm going to make you small, hang on a second, there we go. As I mentioned, when we're deploying ODF — we've installed the operator and we're going to create our storage system — we can now select gp3 as the storage class. The other thing I wanted to mention very quickly is this new little checkbox in the UI for tainting nodes during deployment. What it really means is: if I want worker nodes dedicated to ODF, I want to make sure no other workloads run on my ODF worker nodes. Before, you could do that with the CLI and a couple of steps; now it's very easy — you just select "taint nodes" and you're making sure no other application will run on your dedicated worker nodes. That's a nice feature that makes it easier for everyone to configure tainted nodes. Perfect. Nice. Okay, so are we now talking about the one product I know a little bit about — Multicloud Gateway? Yes, that's right. Just to introduce this new feature very briefly: Multicloud Gateway, as you already know, is like a Swiss Army knife in the sense of all the things you can do with your data through it, and in ODF 4.10 we're adding a new feature called namespace buckets on top of a file system. We're again working with namespace buckets — we can explain in a moment what those are — but what this feature makes possible is this: if you have a legacy application working with a shared file system, with files and folders in the traditional way, and you also have a cloud-native application that wants to use an S3 endpoint with the S3 API, you can now go through the Multicloud Gateway S3 endpoint to access that legacy data set, and also write new folders and files, download, upload — so it makes sharing data between legacy applications and new cloud-ready applications that want the S3 API really easy. That's the new feature that's coming, and it's quite cool. Fantastic. So when you say available, will it be tech preview in this version? No, it's going to be GA, so it's going to be supported and usable out of the box. That's fantastic. I didn't realize that. Any chance you have a demo of that right now? Yes, we can take a look at the demo.
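For anyone who prefers the CLI route that was needed before the new taint-nodes checkbox, dedicating worker nodes to ODF by hand looks roughly like this; the node names are placeholders, and the label and taint values are the ones the ODF documentation describes for dedicated storage nodes, so verify them against the docs for your version.

    # Dedicate worker nodes to ODF from the CLI (node names are placeholders;
    # verify the label and taint values against the ODF docs for your version).
    for node in worker-1 worker-2 worker-3; do
      oc label node "$node" cluster.ocs.openshift.io/openshift-storage=""
      oc adm taint node "$node" node.ocs.openshift.io/storage=true:NoSchedule
    done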
I just wanted to spend two or three minutes — or maybe you can help me, since I know you've covered this before — giving a little context on the difference between the types of buckets we have in Multicloud Gateway: data buckets, or object buckets as we might call them, and namespace buckets. From the developer's point of view the experience is very consistent: they always see a virtual bucket accessed through the S3 endpoint. But on the backend we have many different options we can configure depending on the use case. One of those is data buckets, where we go through the S3 endpoint and we really want to take care of our data — our critical data — and out of the box we get security, with encryption at rest and in transit, plus data efficiency in the sense that we get deduplication and compression. That's all the cool stuff we get with data buckets out of the box. Go ahead. Data buckets are where you can choose whether you want mirroring or spread and you have tiers — that was like the first thing in MCG — okay, just making sure I have my terminology straight, sorry, go ahead. That's correct: on one side you get all of those features out of the box, and on the backend, as you said, you apply whatever kind of policy you want across the different backing stores you have — for example, you can do mirroring between on-premises and the cloud. The only thing I wanted to mention there is that, because we're doing encryption, deduplication and compression, we have to access that data through the S3 endpoint — we have to go through the Multicloud Gateway endpoint to be able to reach those objects. So if we want to share a data set with groups or people on other teams who don't have access to that S3 endpoint, they won't be able to work with the data, because it can only be reached through our Multicloud Gateway. That's where namespace buckets come into play. With namespace buckets we're really working in plain text, so they're very good for sharing data sets between different teams and different endpoints, or, for example, endpoints from different cloud providers. With namespace buckets we're really doing data federation, in the sense that we can present our developers a virtual bucket that is an aggregation of many different cloud resources or on-premises resources, but there's no data movement — it's not like data buckets, where you mirror and move data. Here we're just creating an aggregated view for a developer and making sure they have access to read all the data they need, because we can read from different sources, and then we can write to one of the data sources. We also get use cases like lazy migration out of this. So namespace buckets are a really cool feature.
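To make the contrast concrete, here is a hedged sketch of the two flavors of BucketClass the NooBaa operator understands: a placement policy that mirrors a data bucket across two backing stores, versus a namespace policy that only federates an existing resource with no data movement. All names are illustrative, and the exact field layout should be checked against the operator's CRDs for your release.

    # Data bucket class: MCG owns the object data and mirrors it across two backing stores.
    oc apply -f - <<'EOF'
    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      name: mirror-bucketclass
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - placement: Mirror
          backingStores:
          - on-prem-backing-store
          - aws-backing-store
    ---
    # Namespace bucket class: no data movement, just a federated view over a namespace store.
    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      name: namespace-bucketclass
      namespace: openshift-storage
    spec:
      namespacePolicy:
        type: Single
        single:
          resource: legacy-namespace
    EOF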
And that brings in what we've just been talking about: the new feature is called namespace buckets on top of a file system, so we're able to configure a namespace bucket that consumes a legacy shared file system. That's what I'm going to show you in a moment. Okay, so while you're setting that up, just to be clear: you have some external legacy file system, you're not changing it, you're not moving it, you can't do anything about it, but you've got applications running inside OpenShift and you want to make the experience for your developers very consistent, right? Rather than saying "here's a mount," you're just saying "here's your S3 endpoint, here's the bucket you need" and you're off and running. And that comes through on the namespace side because it really is data federation, right? Okay, so hopefully you've got a demo. Yeah, let me share my screen again — I'll share the entire screen. Can you see this? Is this font size okay? It's pretty small. Let me make it a little bit bigger, like that. I think that's better. Okay, so first, as an introduction: I have an OCP cluster — okay, now I see nothing, oh wait, there it is, never mind, it's just delay, I see it at the top, go ahead. Maybe it's going a little bit slow. We have an OCP cluster deployed and we also have ODF on top, so we can check the storage cluster just to confirm ODF is deployed, and maybe also check that our NooBaa pods are running. So we have OCP deployed at version 4.10.5 and ODF at version 4.10 — as a dev preview, because we're not GA yet. What I want to show you is a very simple demo: we have a legacy application with a persistent volume, so let me switch to the project and check what's running, and let me clear my screen. We have this legacy application with a pod, a very simple application that is writing JSON files to a CephFS shared file system that is part of ODF, and it has a persistent volume claim using CephFS — we have it here, our CephFS legacy PV. What I want to do right now is go into this legacy application, open a shell for a moment so I can show you the file system: with a df -h here we can see our CephFS file system mounted at the /data path. So we go into /data, do an ls, and we have a folder, and inside that folder is where the application writes all the data it creates — these are just records in JSON files created by our legacy application. So this is running, our application keeps creating new records, but let's say we want to onboard a new application that is more cloud-focused and wants to use the S3 API to consume this data set, which is still being generated by the legacy application that, let's say, we can't migrate right now. That's where namespace file system buckets come into play, and there are a few steps we have to follow to get this working.
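For anyone following along, the environment checks Daniel runs here map roughly onto the commands below; the project, deployment, and PVC names are made up for illustration.

    # Sanity checks on the demo environment (names are hypothetical).
    oc get storagecluster -n openshift-storage          # ODF storage cluster is up
    oc get pods -n openshift-storage | grep noobaa      # NooBaa / MCG pods are running
    oc project legacy-app                               # switch to the legacy application's project
    oc get pods,pvc                                     # the legacy pod and its CephFS PVC
    oc rsh deployment/legacy-app                        # open a shell inside the legacy pod
    df -h | grep data                                   # CephFS volume mounted at /data
    ls /data/json                                       # JSON records written by the application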
The first step is to create a namespace store. I'm just going to pull the command up from my history so I don't have to type it all, and let me walk you through it — let me make it a little bit bigger, thank you. What we're doing here is creating a namespace store (this is related to namespace buckets, as we mentioned): we do a create, then we specify the type, which is the namespace file system type we're going to use; then we give it a name, legacy-namespace, which will be the name of the namespace store once it's created; then we set the PVC target, which is the PVC that our legacy application is using — we have to point at the legacy shared file system we want to access; and finally we specify the file system backend, which in this case is CephFS, but it could, for example, be an NFS backend we wanted to access. That's great — wow, actually that's really useful, I'm impressed. Okay, so let's see it in action. You created your namespace store — first step done. Let's do a quick check to see whether it's in the ready state and everything went well. I just want people watching to notice that you're using the NooBaa CLI to do this — you're talking directly to the MCG upstream, which is NooBaa, and this is its CLI, as opposed to oc — just so they know. There you go. Hmm, why is it in the rejected state? Let's have a look. Awesome — things are good when things go wrong. Sometimes it goes into the rejected state and then, once it goes through the operator again, it's put into the ready phase, but just in case let's delete it, and if that doesn't help we can take a look at the logs and see what is actually going on. So let's delete it and give it another shot — everything is ready here, and this is the namespace store, so let's go again. The thing is that it says here it's in the ready phase, but once the operator analyzes it, maybe something is going on. Let's list again — it's rejected, and I'm not sure why, so let's take a look and see if we can find the issue. If we look at the pods and go into the NooBaa core pod, we can see what's going on in the core. This throws out a lot of information — maybe we need another window — but there's actually nothing in red here saying why it's getting rejected, other than a "no such key", and that's not the main issue we're seeing. Okay, let's clear the screen; I think we'll leave this window with the log, open a new one, and try again to see what's going on. So here we have the logs; the namespace store deletes without any issues, and nothing is coming out here. Let's create it again — create, PVC name, backend, everything is fine here — and let's see if we get any clues about what's going on, or any connection error.
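For reference, the namespace store command he walks through at the start of this step has roughly the following shape; the store and PVC names are illustrative, and the flag spellings should be verified with the NooBaa CLI help for your version.

    # Create an NSFS namespace store on top of the legacy application's CephFS PVC.
    # Names and exact flag spellings are assumptions; check `--help` before relying on them.
    noobaa namespacestore create nsfs legacy-namespace \
      --pvc-name='cephfs-pvc-legacy' \
      --fs-backend='CEPH_FS' \
      -n openshift-storage

    # List namespace stores and confirm the new one reaches the Ready phase.
    noobaa namespacestore list -n openshift-storage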
Do you care about this? Something went by on the delete — "no such key", "the specified key does not exist", hosted agents — but that seems to be related to the default NooBaa backing store, not the one we're going through, and beyond that it doesn't seem to be leaving any logs here, so I'm not quite sure what's going on. Let's do a list again for a moment — noobaa namespacestore list — and now it says ready. Yes, this is something I've seen a few times: it has to go through the reconciliation of the operator to get into the ready state, and it's just a case of waiting a minute — maybe we just weren't waiting long enough. Okay, so that's good, we have our namespace store ready, and the next step is to create an account. Let's say this is a new account, and walking through it: we're creating a new account, as we have here; then we're allowing the user to create buckets — this is optional, but it's important to note that if we allow the user to create buckets, those buckets are going to be translated into folders in the namespace we've created, so once we go into our legacy application, the buckets created from the S3 endpoint will show up as normal folders on our shared file system, just something to take into account; then we're also setting the default resource, which is the legacy-namespace namespace store we just created; and with the new-buckets path we're saying, okay, we're allowing this user to create buckets, and at what point in the shared file system tree do we want the new buckets to be created — here we're giving it the root of our file system. So if you were creating a new user for a team or something like that, you could prefix it with something like /group1 or /group2 and keep it all organized? Okay, that's amazing. Yes — we'll see in a moment, once we have everything working, that we can go to the legacy file system, create a bucket, and see what it looks like there. The next two flags tell NooBaa that this account is dedicated to this new feature, the namespace file system, so we have to set both of them to true. And finally, an important part here is the user ID and the group ID. If you look at the UID — this huge number — the recommendation is to set the user ID to match the user ID you have on your shared file system. If we go back to our legacy application, you can see that all of the JSON files in the data set we want to work with are owned by this user ID, and that's why in this command I'm setting exactly the same one.
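The account command he describes looks approximately like this; the account name, UID/GID values, and flag spellings are best-effort assumptions and should be checked against `noobaa account create --help` for your release.

    # Create a NooBaa account dedicated to the NSFS namespace store.
    # Flag spellings, account name, and UID/GID are assumptions; verify with `--help`.
    noobaa account create legacy-s3-user \
      --allow_bucket_create=true \
      --default_resource=legacy-namespace \
      --new_buckets_path=/ \
      --nsfs_account_config=true \
      --nsfs_only=true \
      --uid=1000680000 \
      --gid=1000680000 \
      -n openshift-storage
    # The command prints the access key and secret key for the account; save them for the S3 client.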
What that gives us is that when I write files from S3, or when I want to consume files over S3, I'm not going to get any permission issues, because I have the same user ID on both sides. Okay, so let's go ahead and create the user. This gives us an access key and a secret key for authenticating as this user, so let's save them for later — I'm just going to save them here so I don't lose them. That's the second step. Then there's one more step, which is creating the virtual bucket that we're going to present to the account. Again, it's a NooBaa bucket that we're going to create; let me just get the command from my history so we don't have to type everything out. So we have the name of the bucket, which is legacy-bucket, and because we're using namespace buckets, as you always do with namespace buckets you have a write resource and a read resource — in this case they're the same, so in both cases we use the legacy-namespace store we created. When the final user accesses this bucket, they're going to be able to see the JSON files that the legacy application created. And finally we have the path, which as you can see is json/. What we're doing here is replicating the path where the files live: if we take a look, we have /data and then json, so with that json/ path, when a user goes into the bucket they get exactly the view we're interested in. So let's create this bucket as well. Okay, that seemed to work, and now we have all the configuration in place — we're just going to use an S3 client to check the view, going through the Multicloud Gateway S3 endpoint. I'm going to mute my side for a moment because of my dogs. Okay, that's fine. So I'm just using a standard AWS CLI client to check this. I'll set the access key, and I'm also going to set the secret key. Are you able to hear me right now? It just got a little choppy for a second, so you may need to repeat what you just said, but this looks like standard AWS access to an S3 bucket, right? Yep, this is completely standard access. We're just using the AWS CLI; the only thing is that I'm using an alias that makes our life easier, but you could use any S3 client you like and that would be perfectly fine. So let me set my access key here, and the other thing I'm specifying is the endpoint — this is the Multicloud Gateway endpoint I'm going to use. What is the name of the bucket in there, or are you just setting the alias? There's no bucket name yet — we're going to check the names of the buckets now; I'm just using the s3 module. So if I run a command here, for example an ls, it's going to list the buckets available to this user. You can ignore this warning — it's just because we're skipping SSL verification — but as you can see, when we list the buckets accessible to the user whose access key and secret key we provided, we can see the legacy bucket.
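Putting the last two steps into commands: creating the namespace bucket and pointing a standard AWS CLI at the MCG endpoint looks approximately like the sketch below. The bucket payload follows the pattern the ODF documentation uses for NSFS buckets, but the exact JSON shape, the route hostname, and the alias are illustrative assumptions.

    # Create the virtual namespace bucket that exposes the json/ folder of the namespace store.
    noobaa api bucket_api create_bucket '{
      "name": "legacy-bucket",
      "namespace": {
        "write_resource": { "resource": "legacy-namespace", "path": "json/" },
        "read_resources": [ { "resource": "legacy-namespace", "path": "json/" } ]
      }
    }' -n openshift-storage

    # Point a standard AWS CLI at the MCG S3 endpoint (route hostname is a placeholder).
    export AWS_ACCESS_KEY_ID='<access key from the account>'
    export AWS_SECRET_ACCESS_KEY='<secret key from the account>'
    alias s3="aws --endpoint-url https://s3-openshift-storage.apps.example.com --no-verify-ssl s3"
    s3 ls    # lists the buckets visible to this account, including legacy-bucket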
And if we take a look at what's inside the legacy bucket, if everything is working we should see the JSON files that live in the legacy file system — and as you can see, we have access to all of the JSON files, the same ones we have over here. And this is read and write: for example, I can copy one of these JSON files — I'm going to do a get of one of them and bring it to my laptop; instead of my laptop it could be an application that wants to work on this JSON file. Let me just check everything is in place — for example, run it through jq so it looks a little better — and here we have all our records, so this is working fine. That was a get, so we're pulling, but we can also push, or put objects into the shared file system, in the traditional way you would with any S3 client. Okay, sorry, quick question: the user you created that is allowed to look at the legacy bucket — we restricted that user with those options, but could I give the same NooBaa user a view on a data-federated bucket as well? Can I just pick up that user and let it see other things? I think you can't: when you create an account for this kind of namespace file system bucket, it only works with namespace file system buckets, so you can't mix different backends the way you can with other namespace accounts — you're really forcing this user to be a file system namespace account. The other thing I wanted to show you is a put: if we upload something from our S3 cloud-native application into the legacy file system, we can now do an ls here inside the json directory and see the file I just uploaded, and because we're using the same UID, when I do the copy from my S3 client it lands in the file system with exactly the same user ID, which is really nice because we're not going to have any permission-related issues — and as you can see, we're able to access our file. That's a little bit of what I wanted to show you. We can also create buckets very quickly, because we gave this user permission to create buckets, so for example I can make a quick test bucket here — and as you can see, the make-bucket call went fine. Now if I go into our legacy application at the /data level — remember that when we created the account we specified that new buckets would be created at the root level of the shared file system — you can see the json folder with all the application's data and now the new bucket that we created. And we can keep working from both sides: I could create a file here and that file would be available in the new bucket through the S3 endpoint. So it's a really nice feature for working between legacy applications with a big shared file system — for example, data sets used for machine learning and AI — and being able to access that same data through an S3 endpoint. That's a little bit the idea behind this feature.
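The read/write round trip he demonstrates maps onto ordinary S3 client calls like these, reusing the hypothetical alias from the previous sketch; the object and bucket names are made up.

    # Read a record written by the legacy application and pretty-print it.
    s3 cp s3://legacy-bucket/record-001.json . && jq . record-001.json

    # Push a new object; it lands under /data/json on the CephFS volume with the account's UID/GID.
    s3 cp ./new-record.json s3://legacy-bucket/new-record.json

    # Create a new bucket; it appears as a new folder at the root of the shared file system.
    s3 mb s3://test-bucket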
So, is this integrated into the console? Yes — I forgot to mention that almost all of the steps are integrated into the console: you can create the namespace file system namespace store through the console, and you can also define the bucket, but you're still missing the creation of the accounts. That specific NooBaa account we created through the NooBaa CLI can be done either through the CLI or using a CR; at the moment, creating that specific user for this namespace use case isn't available in the console yet. Okay, all right. Wow, fantastic, that's amazing. So just to recap — it's always important in MCG to get the terminology right — namespace file system buckets are GA in the next maintenance release, or is it GA now? It's GA as soon as ODF 4.10 comes out. The one we mentioned that has to wait for a maintenance release is regional DR; that's the one we have to wait a little while for. Wow, that's a lot of new stuff. Fantastic, it's really great. Okay, so maybe the two of us can think of some demos we could put together to really illustrate how namespace file system buckets could be consumed by AI/ML, consuming the data from both sides, that kind of working with legacy data — my mind is already going, I'm thinking that could be very interesting to show people how to use it. So, questions: we have a little bit more time — do you have more that you want people to know about, new features or anything you want to talk about in a future show? You were a little bit choppy there, but now you're back. That's more or less it — and that's a lot, actually. All right, fantastic. Are you hearing any choppiness on your side? Yeah, I see a little bit of lagginess — it's gone away now, I think, but I saw you jumping just a little; it was only a few seconds where you cut out a bit. It happens. Okay. All right, well, that was amazing, and I want to thank you so much for coming on the show. I will definitely contact you and we'll talk about demos we can do in the future, maybe deep dives into one feature paired with other features — especially since I'm an MCG fan, I would love to do something like that. Thanks so much, that was really fantastic, Daniel; I look forward to having more shows in the future. That's great, I had a good time too, so we can keep talking and see each other later on. Fantastic. Okay, so with that I'm going to sign off and say thank you, Daniel, thank you everybody, and join us in two weeks for another ODF office hour show. Thanks, take care.