the data services office hour. I am Michelle de Palo, your host, and I'm here with Daniel Parks. Daniel, nice to see you. Hi, nice to meet you. You're a regular on the show, but talk to me about what we're going to go through today. Yeah, so the idea today is to speak a little bit about the Multicloud Object Gateway. That's a topic we always like covering, and in this case we wanted to give a use case, with a demo, of how the Multicloud Object Gateway can help you increase your data resiliency when using object storage by going hybrid. So we're going to have an application on premises, but we're also going to be replicating to the cloud, so we have an example of how the Multicloud Object Gateway can help with this kind of scenario. Awesome. OK, so can you give us a refresher, for those who are not so familiar with the Multicloud Object Gateway? Are we going to see namespace buckets, replicated buckets? Do you have an overview? Yeah, that's a good point. Let me share my screen. Even if we don't like to spend too much time on slides, let's just cover three or four to give an introduction. I guess we are a little anti-slide here. OK. Let me share my screen and tell me if it's OK. That's OK, you can see it. Fine. So just very briefly, as an introduction: MCG, the Multicloud Object Gateway, is part of ODF, OpenShift Data Foundation. When you deploy ODF, MCG gets installed with it. I would also bring up that if you have an OpenShift cluster running on a cloud provider and you only have a use case for object storage, you can deploy MCG in standalone mode. That way you save some resources by avoiding the full Ceph deployment and all of the CSI drivers, which provide access to block and shared file systems — ReadWriteOnce and ReadWriteMany. If you don't need those kinds of storage access and you only need object storage, you can deploy MCG standalone. That's actually something you can do now with ODF 4.10. 4.10, OK, I was going to say that wasn't true before, but now in 4.10 it is. Awesome. So that's a good point, just to save some resources. Then, going into MCG: what is MCG, really? MCG is an S3-compatible endpoint that offers a consistent experience to the developer and the application. I mean by this that you always have your S3 endpoint, you have your bucket with your access key, and everything works normally. You can leverage your object storage, you can do puts and gets, and everything is quite simple and consistent from the application or developer side. But on the backend, behind the curtain, we have a really flexible deployment with MCG, with many features. We support a really big array of backing stores: on-premises, where you can have RADOS Gateway, or even persistent volumes from OCP that you can use as the backing store, and the cloud providers — we also support Azure, IBM, and Google object storage. I think you and I actually showed using a persistent volume as a backing store in a previous show, the one where we did MCG standalone on the edge. So look for that, just telling you. Yeah, that's a very good point, yeah.
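For readers following along at home, here's a minimal sketch of what working with backing stores looks like with the NooBaa CLI. The store names, bucket, and credentials below are placeholders, not the ones from the show:

```bash
# List the backing stores MCG currently knows about
noobaa backingstore list -n openshift-storage

# Cloud-backed example: use an existing AWS S3 bucket as a backing store
# (store name, target bucket, and credentials are placeholders)
noobaa backingstore create aws-s3 my-aws-store \
  --target-bucket my-existing-bucket \
  --access-key "$AWS_ACCESS_KEY_ID" \
  --secret-key "$AWS_SECRET_ACCESS_KEY" \
  -n openshift-storage

# On-prem/edge example: back MCG with persistent volumes instead
noobaa backingstore create pv-pool my-pv-store \
  --num-volumes 3 \
  --pv-size-gb 32 \
  -n openshift-storage
```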
So, as we mentioned, you have a huge array of backing stores, but you also have data policies by which you can configure and play with those backing stores: you can configure bucket replication, mirroring, different spreading between the backing stores, and you can also do local caching. There are many things the storage administrator can do on the backend, and on the front end the developer wouldn't even notice — it's completely transparent to them. That's what's really nice about what the Multicloud Object Gateway provides. Going a little bit into how an application or a developer interacts with the Multicloud Object Gateway: it works much the same as persistent volumes and persistent volume claims, for people who have already worked with stateful applications in Kubernetes or OpenShift. With the Multicloud Object Gateway, we have what we call object bucket claims, which follow more or less the same philosophy as PVs and PVCs, in the sense that when you are deploying your application, if you need object storage, you create an OBC, an ObjectBucketClaim object. That, in turn, in NooBaa — NooBaa is the upstream name for the MCG project, just in case you hear NooBaa or see posts about NooBaa; we're normally going to call it MCG — so, when a user creates an OBC, what is MCG going to do for you? It's going to create a dedicated bucket for that application, for that developer. It's also going to create a NooBaa, or MCG, account that gives you an access key and a secret key, which give you a way to authenticate against the S3 endpoint and start using the storage. Another really nice thing about OBCs is that you can get the access key and the secret key into your deployments, into your pods, in a dynamic way, using environment variables. So really, when you create an OBC, a ConfigMap and a Secret get created in the namespace where you created it, with the data that you need to access that bucket: you get the endpoint, the access key, and the secret key. And it's really easy to introduce that into your DeploymentConfig or Deployment and use it to access the object storage, access your bucket, and start working with puts, gets, whatever you need to do with your object storage. All right, so, question. So, as I understand from what you're saying, from the development point of view, when you use an OBC you do next to nothing: the ConfigMaps are done for you, the Secrets are done for you, you get the environment variables that you need to access it. Are the buckets transient? Do they go away after — they're not persistent, correct? Or is that tunable? Is the idea that it's a bucket that pops up for you, so as soon as you're done with it, it disappears? Yeah, it depends on the policy that you configure. By default, once you delete the OBC, the bucket also gets deleted, so it really has the life of the OBC. Once you have finished using that OBC, if you go ahead and delete it, the bucket that is storing your objects also gets deleted. Okay. And that's something that you can configure — whether it gets deleted or not once you decide that you don't need the OBC anymore. Okay.
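To make the OBC flow concrete, here's a minimal sketch. The claim below uses the standard MCG storage class; the names and namespace are illustrative. Once applied, MCG creates a ConfigMap and a Secret with the same name as the claim:

```bash
# A minimal ObjectBucketClaim; MCG provisions the bucket and an account,
# then drops a ConfigMap and Secret named after the claim into the namespace
cat <<'EOF' | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-app-bucket          # illustrative name
  namespace: my-app
spec:
  generateBucketName: my-app-bucket
  storageClassName: openshift-storage.noobaa.io
EOF

# Everything the app needs lands next to it:
oc -n my-app get configmap my-app-bucket -o yaml   # BUCKET_HOST, BUCKET_NAME, BUCKET_PORT
oc -n my-app get secret my-app-bucket -o yaml      # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
```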
And then, just the last two slides that I have, to give a little bit of context so we can later better understand the demo we're going to show: let's speak about the two different bucket types that we have in MCG. We have data buckets, as you can see in this slide. The thing is that when you deploy ODF and MCG gets installed, a backing store gets configured for you by default, out of the box. This is really nice because you can start using MCG without actually having to go into the configuration and set up a backing store; it gets deployed for you by default. And the actual backing store that gets deployed depends on where you have installed OCP and ODF. Meaning that if, for example, we have deployed ODF and OCP on AWS, MCG is going to create a default backing store on AWS S3, so when we add objects, we start storing them in AWS. If, for example, we do an on-premise deployment — it could be VMware or bare metal — the backing store that gets created for us by default is RADOS Gateway. RADOS Gateway is the object service offered by Ceph, by ODF. So really, the objects would be stored on-premises, locally there. Okay, but from the front end, the developer doesn't care; it's there, it's working. That's really for the data engineer and the storage administrator, right? So if you need something complicated on the backend, MCG allows you to have that, but the front end is still simple — it's just going to look like a bucket. Okay, all right, just checking. Yes, yes: the user and application experience is always the same. You have your request for an OBC, and that OBC is going to give you everything in your namespace that you need for access. The only thing that we're going to see in the demo is that you can have different bucket classes, and then the application or the developer can ask for a certain bucket type. So as a storage administrator you could tell the developer: okay, we have three bucket classes that do different things — which we're going to show now — they have different properties, and you can choose, depending on your use case, to use one bucket class or another. In the same way that, if we're speaking about persistent volumes and persistent volume claims, you have storage classes that can point to different storage appliances or different storage configurations, we have the same concept of bucket classes, which we're going to show in the demo. Going back to data buckets. When a user uploads an object to a data bucket, the data gets manipulated by MCG, in the sense that it gets deduplication, compression, and also encryption — encryption in transit and also at rest. This is great because out of the box we get all of these features, but we have to take into account that, because the data is being manipulated and moved around by MCG, we can only access that data from the ODF/OCP deployment where that MCG is deployed, where we have that S3 endpoint. So, going back to the AWS example: if we have an OCP deployment with an AWS backing store and we have uploaded some objects to a data bucket, and we then try to access the same data directly through the AWS S3 endpoint — going straight to the bucket in AWS, using their service — we could see the bucket.
You're going to see it, but you're not going to be able to use the data, because it's encrypted, it's chunked. So really, MCG is manipulating that data and taking care of it. That's an important thing to take into account with data buckets. And then, just very briefly: with data buckets we also have data policies. We could do multi-cloud, configuring a mirror between AWS and Google, and in this case, again, for the end user and the application it's completely transparent, but on the backend an object gets copied to each of the clouds. So we are already working multi-cloud, and we have one copy of the object stored on each cloud provider. We could also go hybrid: we could set up a mirror for our data buckets where each object the user uploads is stored on premises and also on AWS. And we also have multi-site. So that's, a little bit, the data policies that you have with data buckets. But going into namespace buckets, so we don't take too long: the main difference I wanted to point out with namespace buckets is that MCG is not touching the data in any way. When a user uploads an object to a namespace bucket, it's going to be stored on the backing store in plain form — exactly the same way it was at the source. There's no deduplication, there's no compression, there's nothing like that going on. The object is just stored as it was when it was uploaded by the user. Okay. So if you needed encryption, if you wanted all of the benefits of the data buckets, you can't get that with namespace buckets — you would have to take care of that part yourself. Okay, so it's important to be aware of what each type of bucket offers you, so you know what trade-offs you're taking on. Absolutely right. So the big advantage, the really nice thing about namespace buckets: it's true that we lose encryption, we lose deduplication, all of those good features, but you have the advantage that you can access the data from the cloud provider. Going back to our AWS example, we could perfectly well go to AWS with a namespace bucket and, through the AWS S3 endpoint, access the data that we have uploaded to our namespace bucket. We could read it, write it, work with it in any way, because it's just stored plainly; there's no manipulation, nothing being done by MCG to those objects. So even if we, for example, completely lost our OpenShift cluster, we could still go with an S3 client to AWS and have all our data available there. That's a really nice thing that namespace buckets provide. Awesome. And then this is the last slide that I have, and it explains, at a high level, the demo that we're going to go through. Here we want to take advantage of all of the things that we have been mentioning. So what we have is two OpenShift clusters, one on premise and the other one on AWS. And what we're going to deploy is an application — in this case, a very basic photo album application that uses object storage. As the backing store for the photo album application that we deploy on premise, we're going to use a data bucket, because we want encryption, we want all the goodness we get from a data bucket — compression and also deduplication. And we also want to make sure that we're storing all of those objects on premise.
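Before the demo continues, a quick aside for reference: the mirror data policy Daniel described for data buckets can be expressed as a placement bucket class. A hedged sketch with the NooBaa CLI, assuming two backing stores named aws-store and google-store already exist (both names are placeholders):

```bash
# Data policy example: mirror every object across two backing stores
# (backing store names are placeholders for stores you already created)
noobaa bucketclass create placement-bucketclass mirror-class \
  --backingstores aws-store,google-store \
  --placement Mirror \
  -n openshift-storage
```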
So by default, when the photo album uploads an object, it goes into a data bucket, and because this is an on-premise deployment, as we mentioned before, the default backing store that gets configured is RADOS Gateway. So all of the objects are going to be on premise. But then we can go a little bit further and say: okay, what's going to happen if, in a disaster, I lose my OpenShift deployment, I lose my cluster, or there's a problem in ODF? I want to give a little more resiliency, we could say, to our data. So then I'm going to configure bucket replication between our data bucket and a namespace bucket. By bucket replication, what I mean is that anything our photo album application uploads to the data bucket gets replicated to our namespace bucket. And this namespace bucket is going to have AWS as its backing store. So really, anything that we upload to our data bucket gets stored on premises and also gets stored on AWS. With the advantage that, because we are using a namespace bucket, what's stored here in AWS is completely plain. That means we could access this data directly through AWS with an S3 client, or we could access it from another OpenShift cluster, another Kubernetes cluster, or anything that has access to the bucket. So what we are also going to do is take our OpenShift on AWS and configure, on that cloud OpenShift, a namespace bucket pointing to the same source. So this namespace bucket and the one on premise are pointing to the same data in AWS. So we're going to have the images available on the on-premise cluster — by images, I mean what we have uploaded through the photo album — and we're also going to have them available on AWS, on the cloud. What an easy way to replicate. I mean, it's easy, right? And to change things underneath without your users knowing what's going on, so that you can adapt if you have an outage, or you want to add another bucket, or something like that. Okay, so I want to see it. Are we ready? Let me see it. Yeah, okay, let's go ahead. So let me know if this is a decent size for the fonts. Yep. Okay, so the first thing I wanted to show you is that I have two contexts configured. By contexts, I mean that with the kubectl command, or with the oc command, I have an on-prem context and I also have one for AWS. Just so you know: sometimes we're going to work on-prem and sometimes we're going to work on the OpenShift cluster on AWS. We're going to start with the on-prem cluster, and here, just so you see that it's an OCP cluster: we're using OpenShift version 4.10. Also, how can I show you that we are actually working on-premises? We could do something like this. Yeah, so these are all our master and worker nodes, and really what I wanted to show you is that we're running on vSphere — just to show that this is our on-premise cluster that we have deployed, and we're working on-premise. And finally, to give you a little bit of context, we can also check that ODF is deployed. As you can see here, we have already deployed ODF, it's up and running, and we have our storage cluster ready. Okay, and just as a reminder, people: this is a full ODF deployment.
This is not MCG standalone: because we're using the RADOS Gateway, we need Ceph, Ceph is in place, this is on-prem, and MCG is there as well, okay. Yeah, thank you for that. I will also note that we can see the full array of things: everything related to Ceph, NooBaa — that is, MCG — and all of the CSI drivers. So this is a full ODF deployment. Okay. Okay, so going a little bit into NooBaa — and you brought up a good point there, Michelle, about what gets deployed by default. I have just deployed ODF, I haven't done anything else, but I wanted to show you the backing store that gets installed by default, which we briefly mentioned before. I also wanted to bring up that I'm going to use the NooBaa CLI. You can work directly with the CRs and CRDs available in OpenShift, but the NooBaa CLI does some work for you and it's quite nice to work with, so I prefer to use it, in my case anyway. Okay. So let me just show you 'backingstore list' — what gets deployed by default. As you can see here, without doing anything, we get the default backing store that we mentioned, and because we are deploying on-premise, it's actually a RADOS Gateway backing store. Here we can see that the type is S3-compatible. We can also run a status, which gives us a little more detailed information, and there we can see that we're using the RADOS Gateway endpoint. So here you can see RGW: this is the RADOS Gateway endpoint provided by ODF, by Ceph, in our deployment — just to emphasize that we are deployed on-premise. If we store something in this backing store, it's going on-premise, and it's also going through a data bucket, so we get compression and deduplication and everything related to that. And the last thing that I wanted to show before we actually get into configuring things is the bucket class list. Just as a reminder, a bucket class is kind of like a storage class. When we actually need to use storage, we use an OBC, an object bucket claim, and what we can do is select which bucket class we want to use with our OBC. So, for example, if I wanted to use this one that gets deployed by default — which actually points to the default backing store I just showed you — we could reference it in the OBC, and we're going to show in a moment how we actually reference that in an OBC deployment. Okay, so everything I've shown so far gets deployed by default when you install ODF. Now, the first thing that we want to do — and I'm going to go to this slide, which is maybe a little more complex, but I just wanted to show the steps we are going to follow so they make more sense to you. On our on-premise OCP cluster, the first thing I'm going to do is deploy the photo album application using an OBC. And the OBC is going to use the default backing store that I just showed you, which is actually RADOS Gateway. We're going to use a data bucket with this OBC, and it's going to go to RADOS Gateway. So everything that gets uploaded — every image that gets uploaded to MCG with the photo album app — is actually going to end up on our RADOS Gateway on premise. Okay, so let's go ahead. Wait, quick question. In that diagram, after the photo album, where you have the MCG endpoint — that's the internal endpoint, right?
Just to clarify, there's an internal endpoint and an external endpoint. I assume the photo album is going to use the internal endpoint. Yes, that's a very good point. When you are working with applications that are inside your OpenShift cluster — applications running on your OCP — you use the service, what we'd call the service or the internal endpoint. If you need access to the S3 endpoint from outside, you use the route; you go through the ingress route and use that. For in-cluster applications, we always recommend going directly to the service, because you can also, let's say, reduce some costs. For example, if you are deployed in AWS and you are using a route, it's actually going through an Elastic Load Balancer, and you also get charged by the hits on that load balancer. So that's something you can save if you're on a cloud provider: you can save some money by always using the service for in-cluster applications. Yeah. Okay, thank you. Yeah, so then, the first thing we want to do, as I just mentioned, is deploy the photo album on-premises. I'm going to use a script to deploy it, but really, all the script does — and we will also share the link to the Git repo — the only thing it does here, and what I want to show, is that it creates this object. It's going to create, as you can see here, an object bucket claim, which we have spoken about before. And I just wanted to share this part here. The name of the object bucket claim is going to be the on-premise photo album one, obc-onprem-photo-album, so we know for sure that we are using the on-premises one. And the thing I wanted to point out is the bucket class. Inside the spec, using this additionalConfig, we can select which bucket class to use. At the moment we just have the one, as I showed you before, and we're going to use that default bucket class. But when we have — if you didn't specify that, would it default to that in this case? Yeah, it defaults to this one. So this is really a little redundant, but I just wanted to show it and make sure people know that we're actually using this one. It's true that if we removed it, it would also work perfectly. Okay. Yeah, so we have an OBC, and that's going to deploy an OBC against this noobaa default bucket class. And then the other thing the script actually creates is a DeploymentConfig — it's just going to deploy our application. And I wanted to show here how it uses secrets to get the access key and the secret key. This is what we mentioned before: it's really easy for the developer, or somebody deploying an application, to get the access key and the secret key into the application. That's the example we have there. Okay, so all you need — as you can see under env — is the endpoint URL, the bucket name, and the regular AWS access key variables, as long as you have them, and you just let it populate. Okay. Nice. Yeah. That's just done automatically for you. Yeah. Once the OBC is created, everything gets populated. That's really nice because it's really dynamic, in the sense that we can have everything in just one YAML, as you can see here: first the object bucket gets created, then the DeploymentConfig, everything done dynamically, and it's really easy to automate in that sense. Okay. And it's keying off the name of the OBC. Perfect. Okay. Great.
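For reference, the env wiring Daniel is describing might look roughly like this sketch. The OBC name, image, and namespace are stand-ins, and the endpoint points at MCG's internal S3 service rather than the external route:

```bash
# Deployment env vars keyed off an OBC named obc-onprem-photo-album
# (placeholder names); values populate dynamically from the ConfigMap
# and Secret generated when the OBC binds
cat <<'EOF' | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: photo-album
spec:
  replicas: 1
  selector:
    matchLabels:
      app: photo-album
  template:
    metadata:
      labels:
        app: photo-album
    spec:
      containers:
      - name: photo-album
        image: quay.io/example/photo-album:latest   # placeholder image
        env:
        - name: BUCKET_NAME
          valueFrom:
            configMapKeyRef:
              name: obc-onprem-photo-album
              key: BUCKET_NAME
        - name: ENDPOINT_URL
          # the in-cluster S3 service, avoiding the external route
          value: https://s3.openshift-storage.svc:443
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: obc-onprem-photo-album
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: obc-onprem-photo-album
              key: AWS_SECRET_ACCESS_KEY
EOF
```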
The Secret has, as you mentioned, the name of the OBC, in the project or namespace where you created the OBC. So I'm just going to run this with a script, but really, the only thing the script does is first make sure that it deletes everything, and then the only thing it does, as you can see here, is apply — or create — the app.yaml that I just showed you. And as you can see here, the object bucket claim gets created, also our deployment and the service. So let's check that everything looks okay. As you can see, the OBC has been created — we have called it on-prem so we don't get confused later on, because we're also going to create another OBC — and it's Bound. When we see it Bound, it means it has at least been able to create the NooBaa bucket and all of the stuff it needs to create. Let's check if our actual application is up. Okay, we can see that it's running. The next thing I'm going to do is get the route, because I just want to show you this really amazing application. Oh, that is an amazing photo application. Okay. So you're going to upload something. So this is what we have: we can select a photo. I'm just going to select anything here and upload it. And really, when I'm doing this upload — going back to our diagram, let me make it a little bit bigger — what I'm doing is uploading to the MCG endpoint; it's going through the OBC and getting stored on our backing store. So the first part of our deployment is ready. Our application is working, everything is great. But now we want to take it a little step further and also give some redundancy and high availability — or increased resiliency, we could say — to our object storage. So what we're going to do is now add this part that we have here. We are going to first add a new OBC that uses a namespace bucket, and a namespace bucket that is going to use AWS. Okay. So now I'm going to show you this part: we're going to create a namespace store, then we're going to create a bucket class, and finally the OBC to use it. So let's go through these steps, and once we have them ready, then we can configure bucket replication. It's important to note that bucket replication is configured at the OBC level. This is quite nice, in the sense that a normal user could configure bucket replication for his own things: if he wanted to do it without the admin, he could perfectly well do that by himself, if he has enough permissions. That's if you're doing bucket replication at the OBC level; it's true that we can also set replication at the bucket class level. But today we're only going to see the OBC way, and take into account that this could be done by the actual owner of the application, let's say. Okay. Okay, so the first thing that we need to do is create a namespace store. Let me do a clear here. So, a namespace store — let me get the command, and we can go through it, and I will explain a little bit what we're doing here. Using the NooBaa CLI, we are creating a namespace store: namespacestore create. This is the type of store for this namespace store; because I'm using AWS, I am selecting this type, aws-s3. If, for example, we were using RADOS Gateway — which you could also use — or any of the other store types we have available, you would change this type.
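The full commands Daniel walks through next look roughly like this sketch — the target bucket and credentials are placeholders for the ones he set up in AWS:

```bash
# Create a namespace store of type aws-s3, pointing at an existing AWS bucket
# (target bucket and credentials are placeholders)
noobaa namespacestore create aws-s3 aws-s3-namespace \
  --target-bucket my-replication-bucket \
  --access-key "$AWS_ACCESS_KEY_ID" \
  --secret-key "$AWS_SECRET_ACCESS_KEY" \
  -n openshift-storage

# And the bucket class on top of it, covered next: a single-store
# namespace bucket class
noobaa bucketclass create namespace-bucketclass single aws-s3-bucket-class \
  --resource aws-s3-namespace \
  -n openshift-storage
```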
This is the name that I'm setting; it could be anything, but I'm setting it to aws-s3-namespace. And then I enter the access key and the secret key for the bucket that I have already set up in AWS, and that I'm going to show you. So let me just go here a moment. This is the S3 console in AWS, the AWS UI. And as you can see, I have previously set up a bucket here. It's empty, there's nothing in it, but I have set up the bucket, and we also have an access key and a secret key — we don't have to spend time on that. Really, what I'm doing is configuring the namespace store to use this bucket as its target bucket. So let's go ahead and create it. This is one nice thing about using the NooBaa CLI: the secret that you need — the one storing the access key and the secret key — is created for you. If you do this with the CRs and CRDs through the OpenShift CLI, you have to do it in two steps, for example. Okay, so we have our namespace store. Now we need to create the bucket class. So again, this is our command: bucketclass create. It's a namespace bucket class, and we're using 'single', meaning we're only using one namespace store. We haven't mentioned one of the really nice features that we have with namespace stores: you can also aggregate several buckets behind your bucket, offering just a single bucket to the end user. But because we're not going to cover that in the demo, we can ignore it for today, and maybe we can come back to that topic another day and show that really nice feature that we have with namespace buckets. And could I add one later on? Let's say you start with single and you just want to add another one — can you just add another one? Okay, awesome. Yeah, that's a really good point. As always, as we say, offering that consistent experience to the user: he won't even notice, but you could add more namespace stores to this same bucket afterwards, and you can chop and change as you need without the user noticing. So yeah, we have this bucket class. This, again, is the name — I have chosen aws-s3-bucket-class — and then we set the resource. The resource is just the namespace store that we created a moment ago, which, if you remember, we gave this name. And then, finally, the project, which is openshift-storage. So let's create this bucket class. Perfect. So now, if we list the bucket classes, we should see two: the default and this new one. Okay, let me — I always forget to set the project. So, just as you mentioned, Michelle, we now have the one that we created ourselves — this is the one that is using, as you can see here, a namespace store — and we also have the default one that we used before. If you take a look, our photo application is using this one, which is on premise, and now we have this one that we could say is in the cloud, on the cloud provider. Okay. Now that we have this configured, what we actually want to configure is the OBC, because we need an OBC to do bucket replication. So... So you're going to create the claim, the object bucket claim, which in turn creates a bucket, but it's all going to use the bucket class we just made, the AWS one. Okay. So, as you can see here, this is exactly the same as, or very similar to, the one we used before: it's an object bucket claim.
The big difference here is that, instead of calling it on-prem, I'm using the AWS name, so we are clear that this one is on AWS. And then the most important part is that the bucket class it's going to use is the aws-s3-bucket-class that we just created. So... Perfect, okay. So we are sure that this is going to use what we have in AWS. So, just create it. And now let's do an 'oc get obc'. As you can see, we have the on-prem one that we had before — where our photo album application is currently writing, where it's uploading objects — and then we have the bucket on AWS. Perfect, we have everything set up. Now, the thing is that we want to configure replication. So let me just go back to the diagram for a moment, so we don't get lost. Everything that you see here, we now have ready: we have our OBC on-prem, we have our OBC on AWS. And now we want to do this part here, which is configuring bucket replication. We are going to configure bucket replication in one direction: when we upload something to the on-prem photo album, it's going to get replicated to AWS and be accessible here in AWS. Perfect, so let's go into that step. It's true that we could have configured bucket replication straight away, but I wanted to do it in two steps, so we better understand how it works. The thing is that now we want to edit our on-prem OBC, because I want to configure the replication from on-prem to AWS. So let's edit this, and here, in the additionalConfig, we need to add an extra line that I have prepared — a line with the replication policy — and I will explain it so we better understand. What we're doing here is setting a replication policy for this OBC. The rule ID can be anything, but I'm setting something that helps us understand it: what we're saying here is that we are replicating in the direction from on-prem to AWS. Then we set the destination bucket — the destination bucket where we want all of our objects to be synced, copied, or whatever we want to call it — which in this case is the AWS photo album OBC that we just created a moment ago. So this is really going to be our namespace bucket, where we do the replication. And finally — I was just about to ask: what can we do with prefix? By leaving it empty, we're copying everything, is that it? That's correct. But you can do really fancy things, in the sense that you can have several rules, and you could say: okay, all of the HTML files, copy them to bucket A on AWS, and all of the images, copy them to a bucket in, I don't know, IBM Cloud. You can have several rules with different filters — the filter being the prefix you can set here, and that kind of thing — so you can really filter which objects get copied. If you don't put anything, like we're doing, everything gets copied there. Okay. Okay, so this is the first step that we have. Going back to our diagram: we have set up this bucket replication here. So now, if everything is working okay, what we should see is that once we upload an image here, it gets replicated and becomes available in our namespace bucket, and even in AWS. Okay, quick question. The photo that you've already uploaded — should I see that replicated as well, or is it only new things? Yeah, everything that is there.
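For readers following along, the replication line Daniel added under additionalConfig is a small JSON policy. A hedged sketch of what the edited claim might look like — the names follow the demo, and the generated bucket name behind the AWS OBC will differ in a real cluster:

```bash
# The edited on-prem OBC, with a one-way replication rule added under
# additionalConfig: everything (empty prefix) gets copied to the bucket
# behind the AWS OBC. Namespace and bucket names are per the demo.
cat <<'EOF' | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-onprem-photo-album
  namespace: demo
spec:
  generateBucketName: obc-onprem-photo-album
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: noobaa-default-bucket-class
    replicationPolicy: '{"rules":[{"rule_id":"onprem-to-aws","destination_bucket":"obc-aws-photo-album","filter":{"prefix":""}}]}'
EOF
```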
So even if it's old, it's going to get replicated, because of how it works: there's a batch job that runs in the background, and when it comes to life, it examines everything that is on the origin, and whatever differs between the origin and the destination gets copied. As for the timer, I'd have to ask engineering, but from what I have seen, depending on when you actually upload an object, it can go from seconds to even five minutes — I think five minutes is the top of the configured timer. So really, it's asynchronous replication, so it's going to be eventually consistent. I'd also have to ask whether there's a way to lower that timer or set it to a specific value, so if five minutes is too high for your use case, you could lower it and the replication — that batch background job — would start earlier. Yeah, so, as you said, everything gets copied, and what we can check now is whether it's already in our bucket. The first thing we need to check is that it actually got uploaded — we didn't check that before. Okay, it's here. The next thing we can check is the journey that object takes: whether it's in the AWS photo album bucket, whether it's here. Because it's a namespace bucket, it should just show up once the replication has happened; it's not encrypted, you can see it, okay. That's right, and that's a good point to make. As I just said, with this ugly S3 client here, what I really mean is that you don't have to actually go through OCP: if you have a client, you can see the data and work with it from an external client, just going to AWS. So at some point, once it actually gets replicated, we're going to be able to see it here. It can take a while — well, it's already here. Let's just check. This is the path that the photo album uses, which is a little bit ugly, but as you can see, we have our image uploaded. So it has done its travelling: from here, from on-premises, it has gone to the cloud, and now we have it in our S3 bucket. So, to finish, the next step we could take is what we're going to do now: we're going to go to our AWS cluster. Let me make this a little bit bigger. We're going to go to our AWS cluster, configure the namespace store and the bucket class, and then configure the photo album to use the namespace store that accesses the same data. And this is really nice, in the sense that — this is a DR use case, for example — if you had a full failure on your on-premise cluster, you could have a global load balancer or something that would point users to AWS instead of on-prem, and they would be able to keep working with access to the same data; your application would keep working without any interruptions or issues. Nice. So let's very quickly cover that. As we mentioned before, I'm going to change context: now I'm moving to AWS. Here I want to show you, just for a moment, that we're actually on AWS. As you can see here — well, you can't see it very well, but this is AWS: our machines have been deployed in AWS, and this is the instance name; these are our EC2 worker nodes, for example. So, just making sure that we're on AWS. Here, what I want to do is again create our namespace store, and the really nice thing is that I can run exactly the same command as I used before. Right. Because we want to use the same bucket.
Exactly — we are using the same bucket in AWS. And again, the bucket class can be exactly the same. We could change the name of the bucket class if needed, but it's actually quite practical to have it the same, because then your deployment, your application, consumes the same thing. So: bucket class, single, perfect. Okay, so we have our bucket class ready, and now what I'm going to do is deploy the application again — just to show you that in this application, the object bucket claim we're using points to the namespace bucket. So here I'm not using RADOS Gateway or anything; I'm just using the namespace bucket that I just created, and it's going to go through AWS. So let's go ahead and run the deployment of our application using the namespace bucket. As you can see here, we have the object bucket claim created; let's do, as we did before, 'oc get obc'. It's Bound, so let's see if our application is up. Yep, running. Let's get the route, because it's going to be different from the other one, and let's also open our great photo album application. And as you can see, it's there straight away, because it has been replicated. This is on-premise, where we have this image, and we also go into the cloud and we have exactly the same image, synced. So this is really nice, because we have an HA setup with an application that is running on the cloud and also on-premises, and the data is synced for us. Okay, one quick question. This is a one-way replication, correct? If you upload something here, it's stored plainly — a normal object that ends up in the Amazon S3 store — but it's one-way replication, so it doesn't make it all the way back to the RADOS Gateway. Yeah — the thing is that you can do that too. I have it ready to show, though I don't know if we have time. Let's say that you lose this cluster and you have this cloud photo album application working; you also want whatever gets uploaded here, to AWS, to be copied to your on-premise deployment, so when you bring your cluster back, you're going to have everything there. So you can have bi-directional replication. The only thing we would need to do is configure the other OBC: we just configured this one, and we could configure a replication rule for the other one. So you configure a rule on the non-on-prem one, the cloud OBC. Okay, all right, that makes total sense. So, we are getting tight on time. Yeah, so I will just set the rule, but we won't wait for the object to replicate — just so we show an example — and then we can close off, if you think that's okay. So let me just change context to on-premise, because our OBCs are there. We can clear here and do 'oc get obc' — ah, wrong project; we need to move to the demo project. We have our OBCs here, and what we can do, just as we did before when we edited the on-prem one, is edit this one and add another line here — which I had ready — with the replication in the other direction, and then we'll have bi-directional replication configured. And again, we can use a filter there, so if we only wanted to copy back certain things — that's actually really flexible. Yeah, and as you can see here, the difference is just in the rule I'm setting: the rule ID is aws-to-onprem, the other direction, and the destination bucket is the on-prem one, so the replication goes that way. So now, if I upload something here — we can maybe use this one — just upload.
And we will leave it there; if it gets replicated while we close out, we can just check afterwards whether it arrives. But I just wanted to mention two quick things before we end: this bucket replication feature has, by design, some limits that may change in the future. One is that you can't use S3 object versioning — that's not actually implemented. And the other is that it doesn't track the deletion of objects at the source. What I mean by that is: if, on your source bucket, where you have configured replication, you delete an object, that object is not going to get deleted at the destination. The destination is always like your source of truth, where you have all of your things. That may change in the future — I'm sure it's something engineering is thinking about — but that's how it is at the moment, so people also know that's a current limitation of bucket replication. Fantastic, wow. Okay, so we're at time. I will be posting in the description all the repos that everyone needs, and maybe some tutorials and more information on MCG. And I wanted to thank Daniel once again for such a fabulous show; we have more planned for the future. Thanks everybody. That was fun. Thanks a lot. Thanks.