Good morning, good afternoon, hi, hello. We'll wait a couple more minutes and have a few more people join. Okay, we'll start in one minute; we'll just give a few more people time to join. Let's begin. So we'll start off with a quick update on the tech leads and co-chairs. I'm really happy to announce that both Shang and Raffaele have been elected as tech leads by TOC vote, so they take up their new roles as tech leads within the SIG. Yay, thank you. We also have two more nominations: Nick Connolly has been nominated to become a tech lead, and, given that Erin is stepping down from the SIG, at least for now, while she is sitting on the TOC, we are nominating Xing as co-chair. So congratulations to everybody. Hopefully the votes will come through shortly and we can formalize it, but welcome, Xing, as co-chair. What we were going to do today was to focus on the cloud native disaster recovery document. We wanted to review it so that we can identify some clear actions to work towards finalizing the document. To this aim, Raffaele was potentially going to do a demo, and we would then go through the document. The Vineyard project team would also like ten minutes to clarify their project following the TOC sandbox vote. So Raffaele, would you prefer they went first, or that they presented towards the end? "As you like; let them go first, I think." Okay, it's okay with me. All right, that makes sense. So, the Vineyard project: Tao, do you want to take the next ten minutes and cover off a quick update on the Vineyard project? For clarity, I raised this at the last TOC meeting to get some feedback from them on why they deferred the sandbox vote.
What they wanted to understand better was how the project fits, or could fit, into the CNCF, and whether the CNCF is the right home for the project rather than, say, the LF AI foundation or something like that instead. So I think that information would be useful to share now. "Hi Alex, I'm not sure whether Tao is here. Hi Tao, would you like to take over? Or maybe, yeah, my name is Wenyuan Yu and I will give the presentation." Brilliant. "Okay, I will share my screen. Can everyone see it?" Yep. "Cool. We have given presentations at previous SIG meetings, so today I will just quickly go over the key points from the previous presentation, and I will focus on a few questions raised by the TOC members: one is what value the project adds to the CNCF, and another is why this project fits the CNCF, as Alex just mentioned. I will quickly go over the Vineyard project. First of all, what is Vineyard? Vineyard is a distributed in-memory object manager for data-intensive applications. It is designed for cross-system in-memory distributed data sharing, and we want that to have very low cost, in a zero-copy fashion. We want to provide out-of-the-box high-level abstractions, we want to support easier I/O and integration, and lastly, we want to leverage the capabilities provided by Kubernetes to be able to co-schedule data and the workload together with Kubernetes. Here is some background on why. For data processing on a single machine, say in a Python environment, sharing data across different libraries can be very efficient, with actual zero copy. Here is an example of sharing an array between NumPy and PyTorch: basically, we declare an array using NumPy, and it can be easily converted to a PyTorch tensor; if you change one value through that tensor, you can see that you are touching the same piece of memory."
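The zero-copy idea in the NumPy/PyTorch example can be sketched with the standard library alone (NumPy and PyTorch may not be installed here, so this is an analogue, not the actual snippet from the slides): two "libraries" holding views over the same buffer, so a write through one is visible through the other.

```python
import array

# Stand-in for the NumPy array in the example (stdlib only).
buf = array.array("d", [1.0, 2.0, 3.0])

# Wrap the same memory without copying -- analogous to
# torch.from_numpy(arr), which shares the NumPy array's buffer.
view = memoryview(buf)

view[0] = 42.0
print(buf[0])  # 42.0 -- the write through the view is visible in the original
```

The point being made in the talk is exactly this: no serialization, no copy, just two handles on one piece of memory.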
But it is not easy to do the same across different processes in a user-friendly way. It is possible with a component called Plasma from Apache Arrow, which is a local object store using shared memory, but a user needs to program manually against Plasma, manage the metadata by themselves, and handle the data serialization and deserialization logic, so it's not very straightforward to implement. And it becomes even harder in a distributed setting, where a distributed data structure is spread across multiple machines: even if the data lives in memory, it's very hard for different libraries or systems to share that piece of data. That is the problem we basically want to solve with Vineyard and Kubernetes. Here is a real-life big data application at Alibaba, and we have many similar pipelines running at Alibaba. We observe that the workloads working on data are now quite diverse, and it's no longer possible to process every kind of workload on a single system such as Spark; we can't use Spark to do everything. We observe more and more pipelines at Alibaba involving multiple systems working together, and sharing data between them is not easy. The most common way is to dump the data into a distributed file system and load the data again, and these kinds of operations have a huge cost. Finally, applying cross-task optimizations, where we want to pipeline these tasks, is very challenging: we have to wait for the previous task to finish before we can start the next one. So our vision is to leverage Vineyard and Kubernetes as a new way to build future cloud native big data applications.
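The Plasma-style cross-process sharing described above can be sketched with Python's stdlib shared memory (a toy stand-in for a shared-memory object store; Plasma and Vineyard additionally manage object metadata and lifetimes for you, which this sketch deliberately does by hand to make the point):

```python
from multiprocessing import shared_memory

# "Producer" side: create a named shared-memory segment and write into it.
shm = shared_memory.SharedMemory(create=True, size=8)
shm.buf[0] = 7

# "Consumer" side: attach to the same segment by name -- zero-copy,
# no serialization. In real use this would run in a different process.
peer = shared_memory.SharedMemory(name=shm.name)
value = peer.buf[0]
print(value)  # 7

# Unlike Plasma/Vineyard, we must handle cleanup and metadata ourselves.
peer.close()
shm.close()
shm.unlink()
```

This is the "possible but not user-friendly" path the speaker describes: the raw bytes are shared, but naming, typing, and lifecycle are entirely the programmer's problem, which is the gap an object manager fills.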
Here, the data shared between different systems or libraries can be represented as CRDs while actually being stored in Vineyard, and it can be directly mapped into the respective engines. In this way, big data applications can be deployed completely on Kubernetes, with the intermediate data managed by Vineyard and Kubernetes together, abstracted as Kubernetes resources. If the data can live in memory and be used later, we do not have to dump it to external file systems, and we can leverage the scheduling capabilities provided by Kubernetes to minimize the cost of data migration and make data sharing as efficient as possible. Here is a brief architecture of Vineyard on Kubernetes. Vineyard runs as a DaemonSet on top of the Kubernetes cluster, and every object in Vineyard is abstracted as a CRD. Data-intensive applications built on top of Vineyard on Kubernetes can ask for objects in their task specs and access the data just like normal objects in their memory; we can map the objects in directly. And if objects are shared between different processes, we can use a scheduler plugin built on top of Kubernetes to calculate a better placement for the next task given those CRDs, rather than moving the data around. "Can I just try to summarize this? Because I think getting the summary right is important for the TOC. I think the summary is that this is a distributed in-memory object cache with a number of integrations with different libraries, such that they can natively consume the data from the in-memory object cache rather than using, say, files from a distributed file system. Is that correct?" "Yes, it is correct. There are also a few added features on this object store."
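To make the "objects abstracted as CRDs" idea concrete, here is a sketch of what such a custom resource might look like. The group, kind, and field names below are illustrative guesses, not the project's actual schema; the point is only that an in-memory object gets a Kubernetes-visible identity that schedulers and task specs can refer to.

```yaml
# Hypothetical shape of a Vineyard object surfaced as a Kubernetes resource.
apiVersion: k8s.v6d.io/v1alpha1
kind: GlobalObject
metadata:
  name: dataframe-a1b2c3
spec:
  id: o001234567890abcd          # object id inside the Vineyard store
  typename: vineyard::DataFrame  # high-level type the engines map to
  members:                       # chunk objects, each local to one node
    - o001234567890aaa
    - o001234567890bbb
```

A downstream task spec could then reference this object by name, letting the scheduler plugin place the pod near the nodes that hold the member chunks.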
"We have a scheduler plugin built on top of Kubernetes, so that for each job we can find the optimal affinity for that job and minimize the cost of data migration across the cluster. And Vineyard itself can live in a separate container, leveraging the container-based scheduling provided by Kubernetes, so there is some kind of isolation between the applications and the store." Right, got it. Okay, I think that's very clear. What we can do is I'll reply back to the TOC mailing list with that sort of high-level summary, and I will point them at this ten-minute recording in case they need additional information. But I think that clarifies it a lot; thank you so much for adding the extra context. "Thank you. Should I quickly go over the next few slides, or maybe I'll just leave the link to the slides in the minutes later?" Yeah, if you leave the link to the slides in the minutes, that would be good. "Okay, thank you very much." "Just a small follow-up question: these objects are read-only, right? They're immutable? If one of the elements in the pipeline transforms an object, basically we're creating a new object, and those new objects have to get propagated to the right nodes based on your algorithm, where you co-schedule compute and data, right?" "Yes, that's correct. Currently we only handle immutable data here, but we can handle streams as well, which enables pipelining between those systems." "Thank you." Brilliant, thanks again. Okay, I think in that case we should move on to the cloud native DR document. Raffaele, do you want to drive? "Yeah, I'm going to share my screen."
"Should we do the demo first, or should we go over the document?" If we do the demo, I think that would be cool too. I'd really like us to spend a little bit of time, though, providing feedback on the document, because we've reviewed the document and gone through the deck a few times, and I think what we need now are some clear actions to move forward, because we have general consensus that this is a good idea. There are a few things that I think we want to clarify or clean up in the document, but we have the foundation for something really good here, so we should aim to get it to a point where we can publish a draft. "Wonderful. Okay, so last time, when we talked about this a few weeks ago, if you remember, we saw a presentation that was essentially a summary of the longer document, and we got to a point where we covered the theory. But I thought it would be useful to also see all of this in practice; there were some actual questions when I showed what the demo and the architecture would look like. So the idea today is to actually look at the demo, which I'll try to do quickly, and then we go over the document, like Alex was saying. This is a demo that obviously has to run on some specific products, but there is nothing here that locks you into this combination of products; the products are there just to provide capabilities. We use Submariner for the network tunneling between the clusters' SDNs, and CockroachDB as our workload."
CockroachDB belongs to a new generation of databases. I think of them as offspring of the original idea of Spanner, the geographically distributed database that Google offers as a service; startups have since started providing the same capability as software that you can install in your own data center, and CockroachDB belongs to this generation of databases. I'm using it because I have operationalized it, meaning I have it all automated, but you could do the same thing with the others: there is YugabyteDB, there is NuoDB, there is TiDB, there are many others that have the same capability. Just as a reminder, I think we saw this slide before, and I also have it in the demo deck, but as a quick summary: we are trying to delineate the difference between traditional disaster recovery and what cloud native disaster recovery could look like. The main differences are that the recognition of a disaster is automatic and the recovery process is fully automated. And then we want to get to near-zero downtime, that is, near-zero RTO, and we want to have full consistency, so we never lose any transaction; we prefer to be down rather than lose transactions, and so we get exactly zero RPO.
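The two metrics above are easy to pin down numerically. RPO (recovery point objective) is how much committed data you can lose, measured back from the moment of the disaster; RTO (recovery time objective) is how long the service is down. A tiny sketch with hypothetical timestamps:

```python
from datetime import datetime, timedelta

# Illustrative timestamps for one incident (made-up values).
disaster           = datetime(2021, 6, 1, 12, 0, 0)
last_committed_txn = datetime(2021, 6, 1, 12, 0, 0)   # sync replication: no tail lost
service_restored   = datetime(2021, 6, 1, 12, 0, 40)  # automated failover completes

rpo = disaster - last_committed_txn   # data lost
rto = service_restored - disaster     # downtime

print(rpo, rto)  # 0:00:00 0:00:40 -> zero RPO, near-zero RTO
```

The cloud native DR bar being proposed is exactly this shape: synchronous, quorum-based replication drives RPO to zero by construction, and automated failure detection keeps RTO near zero without a human in the loop.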
I'll go quickly over the rest. This is the infrastructure that we are going to build, and that I have actually already built. We have a control cluster that I'm essentially using to spin up these other clusters; it's not really mandatory to have it, but it makes life easier. You could do it differently, but this RHACM is simply a way to spin up other clusters quickly. Then we have the clusters where our workload actually runs, and we connect these clusters via a network tunnel. You can build a tunnel in many ways; here I'm using Submariner. Submariner is a project that was initially started by Rancher; I think they want to go to the CNCF, and I'm not sure where they stand on that path, but Red Hat is now also heavily contributing to it. It's a way to build a network tunnel between CNI-compliant SDN implementations. All you need is that some of the nodes are routable between these clusters, because they will act as gateways, and then it essentially builds an IPsec tunnel. So in this picture we see cluster 1 connected to cluster 2 and cluster 2 connected to cluster 3, but obviously cluster 3 is also connected with cluster 1. Once you build this tunnel, pods can discover and talk to each other just by creating a connection directly to the IP of the pods. Obviously you need to build these Kubernetes clusters (you see OpenShift here, but consider it just Kubernetes; OpenShift is just a distribution) with non-overlapping CIDRs for the SDNs, but other than that everything will act like one larger network, in this case three networks combined into one. And in front of it we have a global load balancer. We are running on AWS, so I'm using Route 53 as the global
load balancer. Again, you don't have to use Route 53; you just need a global load balancer. By a global load balancer I essentially mean a DNS that has some additional features: it allows you to define multiple backends for a DNS name, it allows you to define load balancing strategies beyond the simple round robin or multi-value, and it needs to have the ability to health check the status of the applications that it load balances, so that when applications go down it redirects all the traffic to the healthy backends. Route 53 has all these capabilities, but many other enterprise-grade load balancers, both in the cloud and things you can install on premise, also have them. Then I wrote a little operator to automatically program Route 53. This is an open problem, I think, in the Kubernetes community: there isn't really an operator to program a global load balancer. There is an operator called external-dns, which works very well if you need to program a simple DNS name for your cluster's stuff, let's say, but when you want to start working over multiple clusters, external-dns doesn't have the right capabilities. Luckily there are other people now working on other implementations, and I think k8gb, if you guys want to take a look, is a wonderful implementation of a global load balancer operator, much better than what I did. So probably in the future I will replace mine with that operator, which I hope we will grow together as a community into something really enterprise-worthy. It's very nice because it's a self-hosted global load balancer: it's CoreDNS running actually in these clusters, and they coordinate in such a way that they can provide the service. A very nice implementation. So
anyway, moving forward. "What's the load balancer you mentioned?" It's called k8gb. "Oh, k8gb, yeah. One question about Submariner: when you create a tunnel between two clusters, that means all the pods in one cluster can see the pods, and likewise the services, in the other cluster, right? But is that basically a tunnel between two clusters, or does it create a mesh? When you have, let's say, two tunnels, for example in this case one tunnel between cluster one and cluster two and one between two and three, does it create a full mesh where all pods can see all the other pods in the other clusters?" Like I said, just for layout reasons I don't have a tunnel depicted here between cluster one and cluster three, but it is actually there, so it's any-to-any connectivity. "I think it creates more of a mesh, and in fact there's a central operator sitting somewhere providing the information on how to connect to each cluster and what subnet it has, if I remember correctly; it's been a long time." Well, in this deployment there is also a tunnel between one and three. It actually goes out using the same gateway node, but it's conceptually a different tunnel. Submariner has different transport options. This is essentially what you can do with IPsec; if you change the transport option to WireGuard, which is a VPN implementation that is exceptionally nicely done, you can actually do a full mesh, meaning every node has a VPN connection open with every other node, so you don't have the additional hops of moving your packets from the node where the pod is to the gateway node, then to the gateway node of the target cluster, and then to the node where the target pod is. It's really
any-to-any node connectivity that you can achieve with WireGuard. But what I have now is not like that: every connection will go through a gateway node, if that makes sense. "Yes, okay. Just one follow-up question: let's say you have some pods in cluster one and some pods in cluster two. Now that we've established connectivity between them through Submariner, how do they discover each other, and know, for example, whether they offer the same service or not?" So Submariner has a component called Lighthouse, which provides discovery; I believe it's CoreDNS-based. The way it's populated follows the new multi-cluster services standard being published by SIG Multicluster. Essentially, first of all you have to explicitly export a service using a CRD, so you say "I want this service to be visible across multiple clusters," and then there is a pattern to determine the name, exactly as there is a pattern to determine the full name of a service within a namespace. It works pretty much the same way: instead of cluster.local, essentially you have clusterset, and the lookup is something like clusterset.local. So the names are predictable, and that's what you can use to write your configuration files. "Thank you." A nice question, yeah. Okay, so that's the preparation. I also have something that is not depicted here: I have a distributed Vault deployment to provide the secrets, certificates I should say, in a consistent way to all the nodes, because everything is over TLS when we build the CockroachDB cluster. So I'm deploying CockroachDB; each region has three instances for local quorum. This is not necessary, but if I understand correctly CockroachDB can leverage that for some optimizations. And then obviously we have three regions here for global quorum, because we
want to be resilient to the loss of an entire region. So it's a nine-node cluster, essentially nine pods. I will now switch to the browser to show you a little bit of what it looks like. So here is CockroachDB: we have nine nodes, and we have three regions, two on the east coast and one on the west coast. I like this feature of CockroachDB very much, which shows you where things are on a map. I have pre-populated the database with a little bit of data; you see I have about 10 gigabytes, not very much, but just enough to run this demo. We can see the nodes here; I have three clusters, and these are the nodes. This one is slow. So what I'm going to do now for the demo is, no, sorry, wrong button, is to cause a disaster: we're going to take one region offline. First of all, we're going to generate some load with this TPC-C client. TPC-C is a standard benchmark for SQL databases; in this case we just use it to generate some load, but I have actually used it to do a performance test on this deployment, in collaboration with Cockroach Labs. We demonstrated, after some tuning, that we got 96% efficiency with this deployment on a test that was sized at 1,000 warehouses (those are just implementation details of this kind of test, but we ran TPC-C 1000 and got 96% efficiency), which means it performs pretty close to the theoretical 100%. So essentially it was a pass, and it's usable for highly transactional use cases, because that's what TPC-C exercises: updates, inserts, and selects on small amounts of data. That was the use case, but in this case we just generate load; we pretend there is traffic coming in to the three regions by creating this TPC-C pod and generating load, and
then we are going to take down this entire region. Our objective is to see that the cluster keeps working: it realizes there is a problem and it self-organizes, because CockroachDB is designed to manage the fault, but mainly we want to see that these clients keep working. And then the other thing we want to see is that when we re-establish connectivity, the cluster heals itself, and in the meantime the clients always keep working. We will lose this one, I believe. So the way I'm going to cause the disaster is by isolating the VPC in which this cluster is deployed. Even if it's a multi-AZ deployment, once we remove connectivity to the entire VPC nothing can go in or out, so from the perspective of the two remaining clusters, they will open connections and see the packets being dropped, so they get a timeout. That's probably the worst condition, because it's easier to manage an error if you get a connection refused or some error from the other side; if you don't get anything back, you really don't know what's going on. So we are trying to generate the worst-case scenario for a fault. Questions? Okay, so I'm going to go and cause the disaster. Let me see. "Sorry, one small question: what's the network latency between the sites once you go through Submariner?" Let me answer by showing you. Submariner has this nice table here which tells you the observed network latency between all of the nodes, so obviously it's a matrix. This 70 you see is between nodes where one node is in the east and the other is in the west, and this is consistent with the latency that Amazon publishes between their own regions, meaning that Submariner really doesn't add a
lot of latency overhead itself. And then, from what I understand of this product, every transaction always needs a round trip, so what we can expect is at least 150 milliseconds of latency for each transaction, though obviously you can have multiple transactions in flight at the same time. This is pretty much consistent with what I got when I was doing the performance test: it was somewhere between 150 and 200 milliseconds. "So, Raffaele, if I understand correctly, you previously said you got 96% efficiency in this situation, with a network latency of 72 milliseconds. Would CockroachDB running this workload perform better with, for example, near-one-millisecond latency?" Okay, so if you're asking whether, if we didn't have so much latency, like if all the nodes were on the east coast, or we were using a closer region, the performance test would have been better: yes, I think so. "So what is the 96% efficiency compared to, then?" Yeah, I always have a hard time explaining TPC-C because I don't fully understand it myself, but what they're trying to say is this: they inject think time into the way the load is generated, trying to emulate actual users doing something. Based on the size of the test, which was 1,000 warehouses in our case, and the duration of the test, they calculate the theoretical number of transactions that you could execute if your system were perfectly efficient, but considering the think time. Then they see how many transactions you actually execute, and the efficiency is the ratio between these two numbers. So obviously, if you removed the think time, you could probably get more
transactions in the unit of time, but because there is think time, that's how you can define efficiency. And I think the purpose of this benchmark, the challenge to the database vendor, is to pick a size of the test, maybe TPC-C 1000, or you can do TPC-C 100, pick a size, then tune your software and find the smallest hardware that you can put together, to contain the cost, to get a good enough efficiency ratio. And then people can decide what they actually want. "Yes, I think I understand more now. It's basically saying that on this hardware, in this environment, you can pretty much run the database without, can we call it, major performance issues; you can run it as a production environment, but maybe the absolute performance is not really that high, because of course it includes the think time, which is determined by the network and maybe the disk IOPS as well." No, it's not determined by that: the think time is really embedded in the client generating the workload, and it is in addition to any actual wait time that is introduced by having to process the transaction over the network or on the disk. That's why you don't get 100%: because there are those additional wait times that the users, since we are pretending to generate load as a user would, have to wait through. So, just to make a point here: we're not trying to say that these databases or this kind of deployment are good for any use case. If you need very low latency for your transactions, this is probably not a good deployment, because there is distance across space, and no product can solve for that. Yeah, but
if you're okay with the latency and you're looking for scalability, so volume at that speed, or you're just looking for resiliency across a disaster, maybe these are good solutions. "Although I guess the same technology can apply, say, across availability zones in the same region or something like that, right? If latency is a concern too, you can still get the failover capabilities." Right, and this is the kind of latency you will get in the same AZ, but obviously then you are resilient to the loss of an AZ, not the loss of a region. It's an obvious consideration, yeah. Okay, so let's go and create the disaster. I need to be in the control cluster, and I need to copy this. Okay, so this is going to, if you can read here, do a deny-all on everything in this VPC. Let me make sure everything is in place. "You're doing a live demo and you haven't made an offering to the demo gods yet." Yeah, I don't know. I was doing something else before and I probably don't have the right context; that could be the reason. So let me set this as the context, and let's try again, just this one. Why is it saying "user must be logged in"? Oh, okay, got it: I'm not logged into the control cluster. I don't know why, I just did it. So let me do this: I'm going to log into all of the clusters again; hopefully this works. Okay, so now I'm going to block traffic. Oh no, I forgot to start the workloads, sorry; obviously we can't do it without that, it wouldn't be a complete demo. For some reason I've lost the pods; something has happened here, and unfortunately I already did this one, so I can't start it. Actually, let me do this: restore connectivity. It should all be good. So I'm going to start the load also on the third cluster. There is a character here; it
shouldn't be here. Well, okay, so I have some networking problem with the third cluster, probably because of that, but these two are working. So this is us generating load on clusters number one and two; I don't know what happened here. Okay, so let's pretend that the disaster has already started; I'm going to make sure we are actually in a disaster situation. I apologize for that. Okay, so what we expect to see: first of all, you see that there is load. The pod is running inside the cluster, but I'm just tailing the standard output here, so it's running all of these, you know, delivery and new-order transactions, and actually with very good performance; there's something strange there. And then if I go to the overview here, we should see that, well, the load balancer may take a little bit to realize what's going on. I hope I haven't broken it; this is a demo that is not going well. You see the cluster is not ready; it does that sometimes when it rotates the certificates, and the unlucky thing is that they were rotated just now. This is how I can fix it, but I think we won't have time to see the entire demo, and it was okay before we started the meeting, because I checked everything. So let me try to fix it on the fly. "But Raffaele, you said RPO zero! RPO zero!" I know, I know. "Sorry, I joke. Live demos are always hard; we always have something like this when we do a live demo. But this demo is, I don't want to say famous, but they ask me to do it like every other week." Yeah, but these certificates, I set their lifetime to two weeks, so maybe... "Maybe I can just support Raffaele here, because I am a previous redacted employee and we deployed this in production for a Dutch customer, and Raffaele worked on that with Andy Bennett and the team at
redacted for that customer so so this isn't working situation from open shift to open shift to open shift um the only caveat here is that it's really having some issues linked to the cni provider so there's a deep dependency on that so um at the beginning when we started the project we deployed it uh on open shift four four I think which was still using sdn and then we were moving forward five from six and we we had some issues when we tested with obn at the time it was was not yet supported from submarine or something you're talking about the submarine right correct yeah no but we what's happening here is the pods couldn't come up because the certificates were expired so now you see they're up and I bet if I go here um I should we should be able to see the console all right um and you see it it's trying to repair some of the nodes but okay so now this guy should start sending more yeah so it's more significant to see we start to see something here and I might be able to start this one also since we yeah okay so we are at the point where we could start the demo right now so if you guys can stay uh on a little bit longer I'm gonna do it um so we want to generate the fault situation by doing this okay okay so now now co-currency b should realize pretty quickly that uh three nodes are not reachable anymore we should see that these are still working right um but the third one obviously the connection is severed so we don't get the tail of the standard output um but these they they continue to work with you know these random numbers for the for the latency um and summary here as you can see sees that there are three suspect nodes and if we wait long enough they will become dead nodes that's just it's an it's internal way of managing it now um we are gonna repair fix the world as somebody said during one of these demos uh I would usually have waited a little bit longer but but the the important thing is to is to notice we are simulating a big disaster because it's the 
loss of an entire region, right? And we didn't have to do anything for things to keep working, and if we went to look at the data, we wouldn't have lost any data. So now I'm re-establishing the connection, and what we should see, first of all, is the Submariner tunnel heal. We don't see that, but the tunnel is being established now, and as soon as the tunnel is re-established we will see that CockroachDB heals too. It may have just done it quickly, but usually it takes a little bit longer. So both the managing of a disaster, and also, when the disaster is over and normality comes back, the restoring of the normal configuration: it's all managed by the system. You could literally sleep through a disaster, although maybe that's not exactly what you want; you may want to get alerts. But still, if everything is fine, you don't have to do anything. So, forget for a minute the implementations and the products that we have seen: this model is what I would like us to promote as a team, as the CNCF SIG Storage. It's essentially saying that we want to set a bar for this new generation of products to reach that level. That doesn't mean that if you're not there you cannot do the usual, more traditional DR with active-passive, right? But that's what I would like to do together.

All right, thanks so much, Rafael. This was really great; as Jing mentioned, it was very brave. I think we'll use the next meeting to maybe go through some of the actions in the document. In the meantime, if people can review the document and provide some feedback, or at least familiarize themselves with it, I think that would be really helpful, so that we can work out any future actions together. For convenience, everybody, I've included this in the meeting minutes, but I've created a short link to the working copy of the document. Rafael, I've made copies so that it can be shared more widely. Yeah, that's perfect, and I'm going to be working on that one based on your initial suggestions; I'm going to really start working on that. Awesome. Thanks, everybody, and we'll meet in a couple of weeks. Have a great rest of your days. Bye. Thank you. Bye. See you. Bye. Goodbye.
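The suspect-to-dead transition described in the demo can be sketched as follows. This is an illustrative model, not CockroachDB's actual implementation: CockroachDB tracks node liveness via heartbeats and only declares a store dead after a timeout (the `server.time_until_store_dead` cluster setting, five minutes by default); the `NodeLiveness` class and the sub-second thresholds here are hypothetical and scaled down for demonstration.

```python
import time

class NodeLiveness:
    """Toy model of heartbeat-based liveness: silent nodes go suspect, then dead."""

    def __init__(self, suspect_after=0.05, dead_after=0.5):
        self.suspect_after = suspect_after  # seconds of silence before "suspect"
        self.dead_after = dead_after        # seconds of silence before "dead"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # A reachable node refreshes its liveness record.
        self.last_heartbeat = time.monotonic()

    def status(self):
        silence = time.monotonic() - self.last_heartbeat
        if silence >= self.dead_after:
            return "dead"
        if silence >= self.suspect_after:
            return "suspect"
        return "live"

node = NodeLiveness()
print(node.status())   # live: heartbeat just happened
time.sleep(0.2)
print(node.status())   # suspect: the severed region has gone quiet
time.sleep(0.5)
print(node.status())   # dead: the timeout elapsed without a heartbeat
node.heartbeat()
print(node.status())   # live again once the tunnel heals and heartbeats resume
```

This mirrors what the demo shows: during the outage the three unreachable nodes move from suspect to dead, and once the Submariner tunnel is re-established their heartbeats resume and the cluster heals without operator action.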
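The certificate expiry that derailed the start of the demo suggests a simple pre-demo check: flag any certificate that will expire within the demo window. The `cert_needs_rotation` helper below is hypothetical (not part of OpenShift, Submariner, or CockroachDB) and works on expiry timestamps rather than real certificate files, purely to illustrate the check.

```python
from datetime import datetime, timedelta, timezone

def cert_needs_rotation(not_after: datetime, window: timedelta) -> bool:
    """Return True if the certificate expires within `window` of now (UTC)."""
    return datetime.now(timezone.utc) + window >= not_after

now = datetime.now(timezone.utc)
fresh = now + timedelta(weeks=2)     # e.g. rotated today, two-week lifetime
stale = now + timedelta(minutes=30)  # would expire mid-demo

print(cert_needs_rotation(fresh, timedelta(hours=2)))  # False: safe to present
print(cert_needs_rotation(stale, timedelta(hours=2)))  # True: rotate first
```

With two-week certificate lifetimes, as mentioned in the meeting, running a check like this an hour before presenting would have caught the rotation before it interrupted the demo.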