Hello everyone. We'll just wait a couple more minutes. First call of the new year, so happy new year, everyone. I have to run and grab my laptop charger; I didn't realize my laptop was so dead. I will be back in like 20 seconds. No worries. Aishin, happy new year. Hi Alex, happy new year. It's a strange one with everything that's going on at the moment, but you know what I mean. Hello Aruba, happy new year. Hello. I'll give it one more minute and then we'll have Rafael talk through the amazing document that he started drafting. I say started drafting; there are like 20 pages there. It's pretty amazing. Hey Rafael, good to hear from you again. Hello. Hi Arin. And Alex, thank you for the kind words. I have severe internet issues today, so I'm going to try to save my bandwidth. I'm on the mobile phone for audio, but I don't know how long the video is going to hold. I'm going to try to share my document when we start, but the connection may break. That's all right. I will put the link to the document in the chat window so people can all access it directly anyway. All right, it's six minutes past, so let's start. For context: at the beginning of December, we had a discussion with Rafael and we decided to take this forward and build out some more information on cloud native disaster recovery. Rafael has been exceptionally busy and has put together a great document. I think we were expecting a skeleton that we could fill out, but there's a tremendous amount of content here. So maybe, Rafael, do you want to take us through the document, and we can discuss and figure out what needs to be done, where you need help, and so on? And is there a way for me to share my screen? I may not have the permission to do it. Okay, found it. Like Alex said, in the last meeting we decided to go ahead with talking about disaster recovery in a cloud native scenario, and I was tasked with creating an outline of the document. But I decided to go ahead and also fill in some of the content, because I already had some of it written down in various articles I had published before, and some of it was in my mind, so I wanted to get it down while it was there. And this is what I have. Obviously it's a draft — a draft in many ways, both in the structure and in the actual content. My hope today is to talk about it a little bit, and then — I think the connection already dropped, so maybe someone else can share this. Can you still hear me? We can still hear you. Okay, good. So yes, my hope is that you will all read it and provide feedback. Do you all have commenter access? I don't know how to share it with everyone in this SIG, but as you request access, I will share it with you and give you commenter ability, so you will be able to add suggestions or comments. If you would like to add your feedback that way, I will incorporate all of the feedback that makes immediate sense to me, and then follow up with you to discuss any feedback that isn't clear to me. If we can work that way — directly on the document, without having to meet — I think we can quickly converge on something that we all agree on and feel we can share. So that is how I would like to collaborate. Now, on the document: I know I can't share, but hopefully you can follow it. Just follow on the left side. Let's take a look at the structure.
So there are three main areas. The first chapter is about availability and consistency. Do you want me to share the document on my screen? Yeah, go ahead and share, but I won't be able to see. Okay, let me try to connect again. But yeah, it's really bad; I don't know what's going on today. Okay, thank you. Yes. So the first chapter is about availability and consistency. Here I'm trying to give some definitions of these concepts, plus others that will be useful later in the document and are relevant in the context of disaster recovery. So we talk about the failure domain. Then we talk about availability, consistency, the CAP theorem — which creates the relation between availability and consistency — and then what we mean by disaster recovery. For me, and you will read it in the document, the main takeaway is that when we talk about availability, we are really asking the question: given a failure domain, what happens to my workload if one component in that failure domain fails? One or more, but generally it's one, right? That's the HA question. Instead, when we talk about disaster recovery, we are asking the question: given a failure domain, what happens if all of the components fail at the same time, from a single event? Obviously, in that case, to still be able to service requests, you will need to have multiple failure domains. So it's really a different question. But today there is a lot of confusion. At least in my discussions with my customers, I've noticed that they tend to overlap the concepts, because there is an expectation that disaster recovery behaves like HA, in the sense that there is no service discontinuity, right? Which is what we're trying to propose with this document: we're trying to create a guideline on how to reach that level of service. But still, I think it is important to keep those two concepts separate. Hey, Rafael, quick question then. Do we want to put a definition of DR in there in some form? Yeah, there is one. Let me explain why I'm trying to discuss this differentiation between high availability and disaster recovery. One of the aspects, the way we're describing it in this document, is that we are talking about multiple instances of something, right? Say a database, or some sort of system, whatever it is we're thinking about here. And what's not 100% clear in my mind is how blurry that line gets between HA and DR when we're considering certain cloud-native technologies. So I'm thinking of something like Vitess or CockroachDB, where you have multiple instances and replication, but those multiple instances are serving both the purpose of HA and the purposes of DR, right? I guess there might be lots of opportunities where there might be overlaps as well. Yeah, that's what I meant when I said I talk to customers who are overlapping those two concepts in the same way you are describing: I'm building an architecture that serves both the purpose of local HA and global, automatic DR. And yeah, with a single solution I can address those two problems, but for me, they're still two separate problems.
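To make that distinction concrete, here is a minimal sketch — illustrative only, not from the meeting or the document; the quorum rule, the zone names, and the replica counts are all assumptions — of how the HA question and the DR question differ relative to a failure domain:

```python
# Toy model of the HA question vs. the DR question, relative to a failure domain.
# Everything here (domain names, replica counts, the quorum rule) is illustrative.
from dataclasses import dataclass, field

@dataclass
class FailureDomain:
    name: str
    components: set = field(default_factory=set)

def survives_ha_event(domains, failed_component):
    """HA question: does the workload keep serving if ONE component fails?"""
    healthy = sum(len(d.components - {failed_component}) for d in domains)
    total = sum(len(d.components) for d in domains)
    return healthy > total // 2  # assuming a quorum-based workload

def survives_dr_event(domains, failed_domain):
    """DR question: does the workload keep serving if an ENTIRE domain fails?"""
    healthy = sum(len(d.components) for d in domains if d is not failed_domain)
    total = sum(len(d.components) for d in domains)
    return healthy > total // 2

zones = [FailureDomain(f"zone-{i}", {f"replica-{i}"}) for i in range(3)]
print(survives_ha_event(zones, "replica-0"))  # True: 2 of 3 replicas remain
print(survives_dr_event(zones, zones[0]))     # True: two healthy domains remain

one_zone = [FailureDomain("zone-0", {"r0", "r1", "r2"})]
print(survives_dr_event(one_zone, one_zone[0]))  # False: DR needs multiple domains
```

The sketch makes the same point as above: the DR question only has a good answer when the workload spans multiple failure domains, even though the HA question can be answered within one.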
We can, however — and I've debated with myself whether we should do it or not — add a discussion to this document saying where we see that possible overlap and how it plays out. But right now, the way this document is written, we try to draw a clear distinction between HA and DR. And for me, the simplest way to explain that distinction is in how you ask the question, right? Relative to a failure domain, HA asks: what happens if one component fails? Any one, right? DR instead asks: what happens if all of the components in that failure domain fail? In that way, you have a clear distinction. Really, the only way HA and DR can start to mean the same thing is if you are working with different failure domains. For example, let's say you have three geographies, and instances of a workload in those geographies. When you ask the question about DR, you're assuming that everything in one geography will fail; your failure domain is a geography. But when you try to ask the same question and call it HA, what you're really saying is: I consider my failure domain to be the three geographies together, and I want to know what happens when one geography fails. In that case, HA and DR can mean the same thing, but you have subtly changed the failure domain level. I hope I haven't confused everyone, but that's what happens in the minds of the customers I talk to sometimes. So if I may make a suggestion, Rafael: maybe it would make more sense to take that section on disaster recovery and move it up above HA, so we address it first. Then when we dive into talking about HA, we can talk about the correlation and non-correlation and the differences between the traditional ways customers view these things and the cloud native approach, kind of like we did in the white paper, Alex. Because what's confusing is that it's called cloud native disaster recovery, but it's the last piece of this. Maybe just reordering will make it easier to get across what you're trying to say, Alex. Does that make more sense? Because it brings a definition up to the start of the document. Yeah. So in the disaster recovery section that is highlighted right now, we just give general disaster recovery definitions; it's not the solution for cloud native disaster recovery — that is only at the end. But yes, we can rearrange the disaster recovery definition section and put it just before or right after the availability one, so people can mentally compare them immediately. I think it's a good suggestion. Yeah, I think that makes sense too. For me, describing it in terms of working across failure domains is a good way of doing it. The second distinction, which I wanted to clarify because it is also important — actually, let me take a step back, because I'm not articulating this particularly well. In a more traditional DR hypothesis, we would be talking about having completely separate instances of a system — whatever that system is, an application, a database, whatever — and those instances would be made available across different failure domains.
And those failure domains could be data centers, racks, server rooms, geographies — whatever they are. And that makes a lot of sense. But I think what we're seeing in the cloud native world, and this is why I'm mentioning it, is the spread of components across failure domains, so that there is effectively much more of an overlap between HA and DR. And for me, that is also an important differentiation. So if the definition of DR is being able to recover a system from the outage of a failure domain, the other differentiator, which might need clarification, is: are we talking about completely separate instances, or can we also talk about a single instance that has components spread across failure domains? I don't know what it means to say a single instance that has components spread across failure domains, because for me those are multiple instances of that single initial entity, whatever it was. Okay. All right, so let me try to explain. Imagine you have a database, and your database has a primary and a replica copy, right? And those primary and replica copies are spread across different failure domains. But from a management point of view — from a control plane point of view — they're being administered as a single instance. It just has two components, the primary and the replica, which are spread across different failure domains. That is different from, say, an environment where you have two separate database instances, and you're using some sort of process to keep transactions in sync, or you're doing storage-level or database-level replication, but they are two completely separate instances, managed separately. You see what I'm trying to get at? The difference being whether the management layer sees it as one instance or as multiple instances. So in the first example, where you have a primary and a replica that are tightly coupled and seen as one instance, a configuration error or an error on one side can easily cascade down to the replica, right? Whereas if they are two completely separate instances, you have separation of those failure domains. I think I followed, but help me understand how you want to handle that, because, you know, there are tons of stateful workloads, and they all have quirks in the way you can configure them — options, let's say, not quirks. What you have described is maybe an option for some kinds of workloads, but how do we generalize that? Because, at least, I was trying to be very general with these concepts. And yes, there are some databases — or workloads — that can do master-slave, and it would be advisable to put the master and the slave in different failure domains. But I'm not sure how that comes back into this document. Are you saying that this could be a way to do HA, and that having a completely separate instance is the way to do DR? Is that where we're going? Well, what I was thinking is that, in customers we're working with, I'm seeing two specific patterns emerge. Imagine you have a storage system, for the sake of the argument. I am seeing two specific patterns.
The first pattern is that a customer installs two separate storage systems, one in each failure domain, and uses some sort of technology to replicate the data, or to make the data available, across those two separate systems. If a failure domain has an outage, the second system is completely independent and functions as a separate unit. But I'm also seeing a second pattern, where they install one storage system, and it just happens that that storage system is spread out over multiple failure domains — for example, a storage system that can do replicas or erasure coding, but spread across multiple failure domains. So now you have a single instance across multiple failure domains, versus multiple instances over multiple failure domains. I'm bringing this up because a lot of the cloud-native technologies — and I'll mention, for example, Rook Ceph and Vitess — tend to favor a single instance spread out over multiple failure domains, whereas other technologies — I'll give Postgres as an example — you tend to see implemented as multiple instances over multiple failure domains. And it is certainly different, but you know. No, I agree. So in this document, I am focusing on the single logical instance — a single logical workload entity spread across multiple failure domains. The other option, maybe I need to understand more; I'm not sure exactly how to model it, but if we feel we should talk about it, I'm certainly open to it. It is, in a way, in the appendix, but I think you have a more comprehensive view of the other option. For me, in all the examples in this document, there is one logical entity, and we discuss how to make it highly available and resistant to disasters; it's not about keeping multiple entities in sync. Unless you go to the traditional disaster recovery strategies — which are, well, all trying to do exactly that. But in cloud native, at least my argument is that you should pick a software or a product that you can deploy as a single logical entity and that will spread itself across multiple failure domains. And then the question obviously becomes how far you can go with the failure domains: can you do geographies, or is it just local? Because some of these workloads have latency issues. But yes, that's the thesis of the document right now. That's where the document is going. We can obviously re-discuss whether that is what we want to say, but that's what the document is saying. Rafael, I had a question about the scope of a disaster. Right now it looks like your disasters are benign faults, as opposed to malicious attacks. Is that correct? Right now the definition of disaster is that everything in a failure domain becomes unavailable from a single event. So there is a specified event that takes out the failure domain. You choose what the failure domain is, but generally speaking, when you talk about disaster, people immediately think of the data center as the failure domain. Yeah. So I guess I'm asking whether a security compromise as a disaster is potentially in scope or out of scope for the paper, because your storage solution is one way to create resiliency against attacks.
I think that's a good point, right? Somebody might use the DR capabilities to protect against a security issue, or even to protect against human error, for example, I imagine. Yeah, I have customers that do that, but when you take a theoretical approach to these kinds of issues, it doesn't matter why the disaster came to be. You could have a virus that spreads across a data center, or you could have proof that somebody is compromising your DMZ in one data center but not in the other, and then you shut down the data center. It doesn't matter why. The point is that at some point we lose connectivity, and that's really what our software should be able to detect and react to if we want to do cloud native disaster recovery — meaning that as soon as connectivity is lost to the other peers of the cluster, the logical workload is able to reorganize itself and keep working, keep serving, without human intervention. So I'm totally fine if we want to list examples of disasters and include security among them, but it shouldn't change the rest of the conversation, right? The trigger of the disaster shouldn't change how we manage the disaster. Tell me if you disagree. I think Alex's one-instance-versus-two-instances point highlights that to some degree, in my mind: a disaster in a single instance is different from a disaster with more layers of independence, and that independence could be the journaling, or security keys, or other aspects of the storage subsystem. I'd be careful there, because if you set up a multiple-logical-instance kind of scenario and you want really fast recovery from a disaster, it means you're synchronizing the data continuously — synchronously, maybe, or asynchronously but with very little delay. So if your fear is "my data was compromised by an actor and I'm now switching to the other side, where my data was not compromised," that conflicts with the requirement of having very timely synchronization. And data is the only thing being synchronized between those instances anyway, just as with the single logical instance. So the risk you're running is, I would say, pretty much the same. Makes sense? I think attacks are an interesting concept, but it could be a rabbit hole if you get into a sophisticated enough attack that can poison disaster recovery itself. How do you protect disaster recovery itself? I don't know if that's in the scope of this document or not, but I could see it being an important topic. I'm taking note of this anyway, because I want to know about it. I'll provide some feedback on the document. I mean, security is an interesting angle. One specific angle I was actually thinking of, between multiple instances versus a single instance, is also simpler things like human error. For example, if you are replicating, say, transaction logs across two different instances of a database, then if somebody makes a mistake on the primary — say, drops a table — the drop doesn't have to get replicated. Whereas if you're working with a single instance across multiple failure domains, a human error takes out all the failure domains at the same time. So those were some of the things I was going to suggest we highlight.
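A toy illustration of the trade-off being discussed here — entirely hypothetical code, assuming a synchronously replicated single logical instance versus a pair of independent instances with asynchronous, lagged replication; all class and variable names are invented for the example:

```python
# Hypothetical sketch: how far does a bad write (e.g. an accidental DROP TABLE)
# propagate under the two patterns discussed above?
import time

class SingleLogicalInstance:
    """One logical entity; replicas in every failure domain apply every write."""
    def __init__(self, domains):
        self.replicas = {d: {} for d in domains}
    def write(self, key, value):
        for state in self.replicas.values():
            state[key] = value  # consensus replication: the mistake lands everywhere

class IndependentPair:
    """Two separate instances; the standby applies changes only after a lag."""
    def __init__(self, lag_seconds):
        self.primary, self.standby = {}, {}
        self.lag, self.log = lag_seconds, []
    def write(self, key, value):
        self.primary[key] = value
        self.log.append((time.monotonic(), key, value))
    def replicate(self):
        cutoff = time.monotonic() - self.lag
        for ts, key, value in self.log:
            if ts <= cutoff:  # the lag is the window in which a mistake can be stopped
                self.standby[key] = value
```

Note that the lag that gives the standby a chance to survive a bad write is also, by definition, a nonzero RPO — which is the conflict pointed out in the discussion above.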
So we can just say: look, there are two ways of doing this — multiple instances across multiple domains, or a single instance across multiple domains — and there are some pros and cons between the two, and maybe have a short table. I can throw that together and we can review it next time. Yeah. There's also the white paper by SIG Security. Is there anything in that white paper that we can reference in this doc? Alex, I don't know if you've reviewed the latest SIG Security paper. Yes, I'll have a quick look and see if there's something we can use there. Okay. I propose we table this for now. Both Alex and I took note of it, so the feedback is not going to be lost. Let's not use all the time on this; I want to talk briefly about the other three sections of the document. Yeah. So the first one, as we said, is about defining these concepts — availability, consistency, and disaster recovery — and all the other reasoning that we need in the rest of the document. The second section describes stateful applications, again covering the pieces that are needed for our discussion. The argument here is that all stateful applications, regardless of what they do, have to solve a difficult problem when they are distributed, and that is keeping the state in sync. So the argument of this section is that, in the end, with regard to availability and consistency, all stateful applications are doing the same thing: they have to solve the same problem. Maybe they solve it in different ways, but it is actually the same problem. So we can model stateful applications — we can create a logical model of a stateful application that applies to all of them. Of course, when you actually build a stateful application, the model doesn't hold exactly, because you have to optimize, highly optimize, right? But the logical model is that there is an API layer — that could be the SQL layer, or a messaging layer, or, if it's a storage type of stateful application, the block device protocol or the file system protocol; it's the way to talk to the application — then there is a coordination layer, and then there is a storage layer. This paragraph is similar to what the Storage SIG has already published, with just the addition of the concept of the coordination layer; the API layer and the storage layer were already identified in that document. And then I'm adding the concepts of replicas and partitions. It should be self-explanatory, but do read some of the considerations regarding replicas and partitions. Replicas are a way to create HA, high availability, for a workload, and partitions are a way to scale by partitioning the data set. And you can use them together to create highly available and theoretically unlimited-scaling workloads, which is what modern products like CockroachDB and YugabyteDB and TiDB advertise they can do. And if you try them, they really can do that: at least relative to the hardware I have at my disposal, I can see that they scale essentially linearly; they don't lose performance to coordination as you create more replicas. So, going back to the structure: you have replicas and partitions. So let's go to the last paragraph, where I say "putting it all together."
The idea is that you have these instances — replicas — that are coordinated to stay always in sync, and then you have the partitions of the data, each of which may have multiple instances, right? And sometimes you have a request that crosses partitions, in which case you have to coordinate between partitions. So the important takeaway is that we need two kinds of coordination protocols, or consensus algorithms: one to coordinate between replicas, and one to coordinate between partitions. And the jobs are very different, because between replicas it's about doing the same thing — all the replicas have to do the same thing — while between partitions it's about doing different things: each partition has to do a different operation to carry out the transaction. That's important to understand. The other thing is that, unfortunately, there is a lot of confusion in the names each workload uses for the concepts of replica and partition in its own jargon, in its own product. That's where I want to do some more research for this document: actually create a classification of the common workloads and products, and show how you can map their terms — for example, Elasticsearch, I think, calls it an index — onto what this document calls a partition. I map all of these to show that really all the workloads can be brought back to the model we are talking about here. Then the third section is about the consensus protocols. It is similar to the section that was in the appendix of the original Storage SIG paper that was published, but a little bit expanded. Here I say that for replica coordination there are specific consensus protocols that fit that job better, and they are Raft and Paxos: the consensus protocols based on leader election, in which all of the instances that participate in a transaction essentially have to do the same thing, based on a log of events. And then there are the consensus protocols between partitions, and that's where two-phase commit and three-phase commit are the better fit. The other thing I talk about in this section is the fact that you should only trust proven consensus protocol algorithms. And if you scroll up a little bit, there is this concept of a reliable replicated state machine and a reliable replicated data store; these are excerpts taken from the SRE book — very, very interesting reading. The gist of it is that this problem can be generalized and has been theoretically solved by a set of papers in academia, which prove that you can build a machine that will replicate state — whatever state means for your particular workload — in a reliable way across multiple replicas, using a leader-election type of consensus protocol. So they give you a mathematically proven way to do it. In fact, I think this kind of layer will at some point be generalized in software, so that people can more quickly build cloud-native-style workloads where they just have to define the API and the rest is, to a certain extent, already taken care of. But anyway, this is just to make the point that it is theoretically possible to build these kinds of workloads.
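To illustrate the two coordination jobs described here, a minimal, toy two-phase commit across partitions, with the replica-side contrast noted in comments. The Partition class and the operations are invented for the example; real systems add timeouts, recovery logs, and a proven, battle-tested protocol implementation:

```python
# Minimal two-phase commit coordinator across partitions (illustrative sketch only).
# Contrast with replica coordination (Raft/Paxos), where every replica applies the
# SAME log entry; here every partition performs a DIFFERENT piece of one transaction.

class Partition:
    def __init__(self, name):
        self.name, self.state, self.staged = name, {}, None
    def prepare(self, op):          # phase 1: stage the change and vote yes/no
        self.staged = op
        return True                 # a real participant could vote no here
    def commit(self):               # phase 2a: make the staged change durable
        key, value = self.staged
        self.state[key] = value
        self.staged = None
    def abort(self):                # phase 2b: discard the staged change
        self.staged = None

def two_phase_commit(ops_by_partition):
    """ops_by_partition: {Partition: (key, value)} -- one sub-operation each."""
    votes = [p.prepare(op) for p, op in ops_by_partition.items()]
    if all(votes):
        for p in ops_by_partition:
            p.commit()
        return "committed"
    for p in ops_by_partition:
        p.abort()
    return "aborted"

users, orders = Partition("users"), Partition("orders")
print(two_phase_commit({users: ("alice", 1), orders: ("order-42", "alice")}))
```

The contrast is the one drawn in the discussion: replicas all apply the same log entry, while each partition stages and commits a different piece of a single transaction.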
And at the end, if you scroll down a little, there is a table where I have classified some of the common stateful workloads we encounter, by the consensus protocol they use to sync their replicas, and then by the consensus protocol they use to sync their partitions, if they have a concept of partitions — because partitioning is not necessary, right? For example, etcd does not support partitioning the data: all of the copies in etcd hold the entire data space. But other databases do, for when you want the ability to manage a larger data set. It's interesting — at least, I found this research interesting — because it maps all the theoretical concepts we talked about before onto actual workloads. It's hard to find this information; they don't advertise it. But I think this is a way to classify workloads, put them all on the same level, and rationalize their individual ways of naming things. It should immediately give you an idea of what a workload can do with respect to the problem at hand — that is, with respect to high availability and disaster recovery. It doesn't tell you what the API does, and we don't care about that; but we can immediately see what to expect in terms of behavior when we want to set up a highly available deployment of these workloads. That concludes the consensus protocols section. The last section is where we essentially give our proposal of what a cloud native disaster recovery strategy should look like. Okay, so I expect we will discuss this one for a long time. My proposal is that one should pick a workload that can be spread across multiple availability zones as a single logical entity, and then let it do its job. It's going to have to be written to work with the concept of CAP, and to have the concept of replicas for sure, and maybe even partitions, right? And the idea is that in this picture, when a data center goes down, you have some kind of global traffic manager that detects it, stops sending traffic to the data center that went down, and sends the traffic to the remaining data centers. And then the workload will reorganize itself and keep serving requests. So what we want to be able to say, in my opinion, is that the right way to do disaster recovery in cloud native scenarios is to try to achieve zero RPO and zero RTO — which means zero data loss during a disaster (that is RPO) and zero downtime during a disaster (that is RTO). So there is a high-level architecture of what this should look like, and then there are some high-level considerations on how to do this with Kubernetes. If you scroll down, there are some considerations, and my conclusion is about what it takes to achieve these capabilities, zero RPO and zero RTO. And keep in mind, traditional enterprises dream about having zero RPO and zero RTO. For them, disaster recovery is an incredible pain. They cannot treat disaster recovery as an HA event, right? Disaster recovery is a human decision; it is not autonomously managed by the system. It's a human decision that there is a disaster, and then a lot of manual processes take place.
They have to do exercises every six months — those that actually do the exercises — and they're very, very painful. Here we are telling them there is a new way to do these kinds of things. You need three data centers, and you need workloads that can be deployed that way. But if you do that, you get disasters managed essentially as HA events, where the reaction to a disaster is managed by the system. As humans, you don't have to do anything; there is no manual intervention. And there is also no manual intervention for something that is also very painful for them: when the data center that was lost comes back up, restoring everything to normal operation is sometimes as painful as managing the disaster itself. In this case, there is no human intervention when the disaster happens, and no human intervention when the data center comes back up. So it's a very, very desirable situation to be in. And in my opinion, that should be our case: we propose that people do things this new way if they're trying to take a cloud native approach to disaster recovery. And the surprising thing for me here — the surprising discovery — is that disaster recovery is usually very much associated with storage; people assume that the solution to disaster recovery will come from storage. In this case, part of the capabilities we need are in the stateful workload, which has to be built that way, to be deployable in that way. But the other capabilities we need really come from networking, more than from storage. I found that insight interesting, so I'm sharing it. Well, I'll stop for a second just to see if you have input here. I like where this is going. But one thing that did ring alarm bells is recommending that cloud native means zero RPO and zero RTO. I mean, that is a very big statement. Yeah, I was expecting that. That might need some discussion, just because of the typical expense of doing it. The more cloud native you are, the more you are technically able to achieve zero RTO and RPO, but that doesn't necessarily mean that you need to do it, or that it is in fact right for you, right? Because that's also arguably the absolute most expensive solution; it is going to cost you a lot more to do that. So — and tell me if you agree — the traditional rule is that the more you want to achieve, the more expensive it is. I'm challenging whether that is still true with these new technologies. I think it's more expensive, but not extremely more expensive. But still, I agree with your argument: we should not say that in cloud native you can only do it this way, and I'm actually not trying to say it that way. I'm trying to say: with cloud native, you can aim for zero RPO and zero RTO, and this is how you would do it. But there are other options, right? In fact, in the appendix, I'm listing the other options. So, Alex, if you don't mind scrolling to the appendix — and it may need some rewording, I agree with you — in the appendix I'm discussing the other options, which I call the more traditional disaster recovery options.
And the point here is — again, it may need some rewording — that even in cloud native, you can still use the disaster recovery approaches that you're probably using today in your traditional, pre-cloud data center. Here is what they look like, here are some considerations on how to implement them in cloud native, and there are some specific considerations for Kubernetes. So they're still absolutely possible, right? But I would like our document to say: still, we think you could achieve zero RPO and zero RTO, and that would be the democratization of this high-performing way of doing disaster recovery. Well, I understand what you're trying to say, but I still strongly feel that what we're saying is that cloud native technologies enable you to achieve that in ways that were previously very hard to do with, I guess, traditional disaster recovery strategies. But I'm struggling with saying that's the only way of doing it, or that's the recommended way of doing it. Because we're specifically open to different options for different requirements in the storage white paper. So for example, things like eventual consistency in data systems are perfectly reasonable compromises to make if you want performance. But eventual consistency also means that zero RPO is impossible. And that's fine, because people can make these compromises, and we discuss those different options and attributes in the white paper. So I think we need to be careful not to present this as the only thing you can do, or the only thing you should be doing, because I think that would be problematic. I am with you, totally. So I agree, and if the words don't come across that way, we can certainly fix them. Yeah, we can change from "we recommend this" to something like: with these new cloud native technologies, this is now enabled and possible, and here is how you would do it — but all the other options are still available. I'm totally fine with changing that. It should be so compelling to try for zero RPO and RTO, and it's so relatively easy now, that I think we'll achieve the same effect anyway — which, in my opinion, is to push people to start using these new approaches. I agree, that makes sense. Well, Alex, can I give you a to-do to read those final sections and see what needs rewording? Definitely, yeah, I will provide some feedback over the next few days. Perfect. And thank you so much for all the work you've put into this. And, echoing what Rafael just said: please provide feedback, that would be great. Excellent. Well, thanks, everyone, for joining the call. I look forward to the next set of updates and the next meeting. Have a good rest of your day, everyone. Thank you. Bye bye.
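As a closing illustration of the architecture proposed in the last section — a hypothetical sketch only; the health-check endpoints are invented, and a real deployment would use DNS-based or anycast global load balancing rather than a polling loop — this is the "global traffic manager" behavior in miniature:

```python
# Sketch of the global traffic manager behavior described above: health-check each
# site and only route to the ones that answer. Endpoints are hypothetical.
import urllib.request

SITES = [  # three failure domains, as in the proposed architecture
    "https://dc1.example.com/healthz",
    "https://dc2.example.com/healthz",
    "https://dc3.example.com/healthz",
]

def healthy_sites(sites, timeout=2.0):
    up = []
    for url in sites:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    up.append(url)
        except OSError:  # connection refused, DNS failure, timeout, ...
            pass         # the failed domain simply stops receiving traffic
    return up

# Traffic is spread over whatever remains; the workload itself re-elects leaders
# and keeps serving, so the disaster is handled like an HA event, with no human step.
print(healthy_sites(SITES))
```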