All right folks, looks like we've got about 17 people on the call, and I bet we'll have a few more stragglers come in. Today we've got a great presenter, Luke Marsden, who's going to help us out and present on one of the latest developments with Dotmesh. After that we've got kind of an open agenda, so we can either end the meeting early or figure out what to chat about, and I've got an update regarding the KubeCon sessions we've been working on. So at this point let me pass it over to Luke, and let's hear what he's doing today.

Awesome, thank you Clinton, and hi everyone. It's great to see so many people here; I recognize some of the names, so great to see you all. I've been having some connectivity issues, so if I drop out, please tell me as soon as possible so that I can slow down and maybe switch WiFi networks. Hopefully this won't be too painful. Cool, so I'll share my screen. I've got a few slides and then a couple of demos, so I'm going to pray to the demo gods; we just fixed a bunch of bugs. Here we go. So, Dotmesh is about bringing data into the circle of control. But before I talk about that, I just want to talk a little bit about what a bad day at work looks like when you're doing software, when you're doing cloud native, when you're doing DevOps.
At the end of last year we spoke to probably a few dozen companies who are doing cloud native, Kubernetes, and DevOps, about their use cases and their pain points, and the following memes capture some of the themes we heard. The first one was that one does not simply capture the state of four microservices at once. The idea here is that even when you're in a development environment, polyglot persistence, i.e. the fact that multiple microservices have multiple databases, makes the complexity of sharing that state with anyone so high that basically no one does it. This means that if you're a developer and you've got four microservices on your laptop, with a Redis and a Postgres and an Elasticsearch, you just don't bother trying to capture all of those states to show a colleague an interesting state. Instead, you either get them to come and look at your computer if you're in the same building, or maybe you give them a tmate session if you're trying to pair remotely, or you depend on using a shared staging environment, in which case there's often contention over those staging environments. That's what we heard when we spoke to people. The second problem we heard about, and I've modified an xkcd that may seem familiar: it used to be that this xkcd said the number one programmer excuse for legitimately slacking off was "my code's compiling", but that's increasingly no longer the case.
It's 2018, compilers have gotten better, and it's actually integration tests that tend to slow people down the most now. We heard that slow and flaky CI systems were often what was making people's lives painfully slow when they were trying to get software deployed and make changes to a codebase.

Just checking you can still hear me clearly, because I just had a message flash up saying my internet connection is unstable. / You're good, you've been perfect. Yep. / Okay, cool, amazing. The wonders of 4G.

So then the next problem we heard, and this is a common one, was: we made a change to the software, the tests all passed in CI, but then the thing blew up when we deployed it to production. This is almost always because production is just a different environment from any of your other test environments. However good your testing is, production is always going to be different in one way or another, even with the wonders of Kubernetes and the fact that we're deploying the same immutable container images everywhere. The fact that this is hard has led to people talking more and more about testing in production, canary deployments, and so on. But I believe that if there were tools to make it easier to test more realistically, and to have end-to-end tests that were less flaky and more reliable, then there would be more testing done before you expose any traffic to new, untested code. So that was another common theme. And then the fourth one we heard, and I'm really interested in this group's feedback on this, was: well, you can put your application in containers, but how do you migrate your data to the cloud? Containers really don't help you move data around. You don't ever want to put large database dumps into containers;
they're just not designed for it, and containers don't help you capture databases either. So this was an interesting theme: Kubernetes gets you most of the way to real cloud portability, and when I say cloud portability I include moving data from on-prem to a cloud provider, but then how do you manage the data migration? So if you take a step back and look at the common theme between all of these things: there are problems at all stages of the software lifecycle. There are problems in dev, where microservices make capturing and sharing dev states hard. There are problems in CI, where end-to-end tests that manipulate real databases are slow and flaky, and the more realistic they are the flakier they are, and when they're flaky it's hard to reproduce the flakes. We spent about a month battling that in our own code, actually. Then in production, unexpected outages just happen, because tests aren't realistic enough, and obviously this plays together with the CI issue. And finally there's this cloud migration issue: containers help you get your apps to the cloud but not their data, so data management in cloud native is still kind of an open space. The common theme, if you really take a step back and zoom out, is that in all cases you weren't in control of data. If you think about what modern software is made up of, any application is made up of code, infrastructure, and data. Over the last 20 years or more, code has obviously been version controlled, and if you go to any team and ask "do you version control your code?",
then the answer is "yeah, duh" most of the time. What's more, CI and automated testing have made control of code easier as well, by being able to reliably test and reproduce the inputs to the various different parts of your code. So controlling code, achieving velocity through control for code, is kind of a solved problem. More recently, infrastructure has been moving into the fold as well, and I don't need to tell this group too much about this, but of course we've moved from a world of snowflake servers into a world of declarative, immutable infrastructure as code. Your Terraform config, your Ansible config, and your Dockerfiles all live in version control; well, the Docker images don't, but everything else does, and the images can be created from that. So this is about controlling both the cloud resources that are deployed and also the runtime state of the servers, and tools like Docker and Kubernetes obviously go a long way towards solving that. So we're left in the situation where data is sort of left out in the cold. To a large extent it isn't subject to the same tools and abilities as modern infrastructure as code, and many of the teams we spoke to said they were still using old-school methods for managing their data. They often had DBAs, where you had to send an email or open a ticket to get a snapshot of production data, and this was just slowing people down, because everything else about their infrastructure was getting faster and their data was holding them back. So our mission with Dotmesh is to bring data into this circle of control. That's a very broad statement, so I'll tell you about how we plan to do it. How do you bring data into the circle of control? Well, we propose that you use a mesh: our mesh. This is not a service mesh, by the way.
It is like a service mesh in that it is a generic tool that you can apply to any software and it will make things easier, but it is not about networking; it is about storage. The mesh that we propose is called Dotmesh, and the Dothub sits at the center of the mesh, with various different stages of the software development lifecycle around the sides, which enable various different use cases. The first use case is that you have a developer on a development machine, and they're able to capture the state of multiple microservices at once in a unit that we call a datadot. Once you have a datadot, it's possible to treat that datadot like a git repo: you can do commits, you can do branches, you can do push and pull. So the developer can create a commit of multiple microservices' state in a single atomic unit, and push that state up to the Dothub. I think of it kind of like putting the state on a shelf. Then a different developer, in a different time zone, in a different country, certainly on a different computer, can pull that down to a different environment and have exactly the state of the application that the first developer had: not only the code, but also the data. We're seeing use cases for this around things like reproducing security vulnerabilities. Developer one manages to find a security bug in an application that you can only demonstrate by showing the exploited state of three different data stores at the same time, because all the IDs have to line up and it involves touching various different parts of the system. They really want to be able to share that with the SecOps team, and they can now do that: rather than just writing down a list of steps to reproduce, they can actually share a snapshot of the entire environment with that team, via a snapshot in the Dothub. Another use case is taking
failed CI runs, so taking the output of the CI system and putting failed CI runs in the Dothub, which allows you to reproduce flakes: just pull them down to a developer's machine to get exactly the state of the environment when it failed. Another use case is taking realistic data from production, via some sort of scrubbing process, into a staging environment, or using it to run automated tests in CI, so that you can run performance or acceptance tests against realistic snapshots from production. And there's another use case, which is migrating apps and their data between different clusters. Notice that I turned production into two clusters, as it often will be, maybe in different clouds or different regions or whatever. Being able to take a snapshot of that production data for the entire application, not just one of its polyglot databases, and move it to a different cluster: that's the fourth and final use case. So I'll pause there. I've got two demos. The first demo is going to be the development side of the house: I'll show commits, branches, and pushing and cloning to and from the Dothub. The second demo is what I call a DotOps demo, which is orchestrating data replication between two separate Kubernetes clusters. Just before I do the demos, I'll take any questions on the content so far.

Hey Luke, this is... / Go ahead. / Sorry, so the first one is: for taking a snapshot of a microservice, do you rely on the underlying storage infrastructure to provide the snapshot service, or is that something you implement as part of Dotmesh?

So we've implemented it as part of Dotmesh. It's a layer that sits between the underlying reliable storage and the application, which is useful because it means it works on your laptop as well.
So it enables that portability between different stages of the software lifecycle. But one thing I will mention is that we are absolutely not trying to implement a synchronously replicated block storage system. We work in collaboration with synchronously replicated block storage systems; we see this as very complementary to those systems, like a Portworx or a StorageOS or an OpenEBS or a Ceph, or in fact EBS or PVs on a cloud provider. In fact, we're currently working on an integration that allows us to support failover in production by just relying on the reliable disks provided to us through Kubernetes, using those APIs.

I see, so you have your own built-in snapshot mechanism, and you don't rely on AWS snapshots or Azure snapshots or whatever to implement the function? / That's correct. / And the second question is: when you talk about taking snapshots of multiple microservices, do you in any way coordinate snapshots across microservices, so that these are globally consistent, point-in-time snapshots? Do you have any guarantees for the snapshots that you take across different microservices? You mentioned the example of Redis and MySQL; do you do any coordination across these microservices?

Yeah, so we're working on a system that will not require coordination, because it will allow consistent atomic snapshots to be taken across multiple microservices, even if they're running on different machines. There is a different approach, which I've seen from our friends at Kasten (Kanister, I think, is the open source project, but it's Kasten and K10), which is to coordinate between the different services, and I think it's interesting to explore both of those approaches in parallel. / Okay, thank you.

Hey Luke, a quick question:
so you are on the data path, then? / Yes. We're on the data path, but we're not providing all of the data path; we're assuming the existence of reliable disks, like reliable virtual disks, underneath us. / Understood. But you are on the data path in the sense that, when you create the datadots, you're extracting or packaging all of that data by being on the data path? / Correct, yes, that's right. / Okay, thank you. / Cool, no problem.

Great, so I'll run through some demos, and there'll be time for more questions afterwards. Let me try to make the Zoom thing get out of my way; it's always in my way. The first thing I'll do is run through the development side of the house, which can be shown using this very simple demo that we've got on our website. If you want to try this yourself, feel free to check it out afterwards, or whenever you like. If you go to our website, this is under "Try on Katacoda": try the tutorial and kick the tires. So, just to start by showing how easy it is to install Dotmesh: I have here a Linux machine that's part of this hosted tutorial environment, but this works just as well if you're running it on your laptop on macOS or Linux. Installing Dotmesh is just a matter of running a curl to download and chmod a Go binary, and then we run a single command, dm cluster init. dm cluster init assumes that Docker is installed; it then pulls down the Dotmesh server image, creates a new Dotmesh cluster, and it only takes a few seconds. The idea is that even if you're using Dotmesh on your laptop, you still create a cluster; it's just a single-node cluster. So all Dotmesh clusters are alike; they're all homogeneous.
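The install steps just described boil down to something like the following; the download URL and install path here are assumptions based on the description, not exact commands from the demo, so check the Dotmesh docs for the real ones:

```shell
# Download the dm client binary, make it executable, and put it on the PATH
# (URL is illustrative -- see the Dotmesh install docs for the real one)
sudo curl -sSL -o /usr/local/bin/dm https://get.dotmesh.io/$(uname -s)/dm
sudo chmod +x /usr/local/bin/dm

# Create a single-node Dotmesh cluster on this machine.
# Assumes Docker is already installed and running; this pulls the
# dotmesh-server image and brings the cluster up in a few seconds.
dm cluster init

# Check that the client and server came up and report their version
dm version
```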
And you can push and pull between any Dotmesh cluster. So I can check that that came up: yep, that's running 0.3.3, that's good; we released that earlier today, so this is really fresh bits. I can then start up a really simple docker-compose application and do a dm list. Actually, I'll show you inside the docker-compose application first. (Looks like I've got quite a lot of latency.) So, inside the docker-compose.yml I have just a regular docker-compose file with a web service and a Redis; and I'll show a Kubernetes example in a minute, by the way, this is just the very simple early version using docker-compose. It's using a Docker volume driver called dm, and that volume driver refers to a moby-counter volume. So when I did docker-compose up on this file, that's why you saw this moby-counter in the output of dm list. Now if you look again at dm list, you can see Dotmesh knows that there's a moby-counter dot, that the dot is currently on the master branch, which server it's on, which containers are using it, and how big it is: 19 kilobytes is basically the size of an empty filesystem with just a tiny Redis file in it. So I can now commit the empty state, and if I do dm log, that's the empty state; there's nothing in this commit at all. Then I'm going to make a new branch, called branch-a, and now I can show you the app. This is the application; it's really super simple. It's an app that lets you click on the screen to add logos, and it stores the positions of the logos in a Redis database, and the Redis is configured to be persistent.
So it's writing to disk.

Let me say, sorry to interrupt: feel free to add ten more minutes to the presentation. I think we have a light agenda after this, so take your time, whatever you need to do here. / Great, thank you.

Cool. So I will take my time in order to spell out CNCF, and the idea here is that the positions of these logos on the screen are recorded inside the Redis database. CNCF; yeah, I spelled it right, that's good. And if I do dm list now, I can see that there's 21 kilobytes of dirty data; there's 21 kilobytes of clicking that I've recorded. I can do another commit and say "hello cncf", and that's my commit message. Now if I do dm log, it says "hello cncf". Just to prove to you that that's on a separate branch from the master branch, I can now switch back to the master branch, and notice that all of the icons disappear. In fact, this is slightly more impressive if you pull this out so you can see both at the same time; let me just do this, hopefully that will work. So yeah, I'm on the master branch; I can switch back to branch-a, and notice that all of my state comes back. What's going on under the hood here is that Dotmesh is coordinating switching out the state of the filesystem underneath the running container. But don't worry, it's not that scary: we also coordinate stopping the Redis container and starting it again around that switch of the data. So it's done in a way that doesn't break the application; it just allows us, in a development mode, to very rapidly switch between different versions of development data. So I'll go back to the master branch again, see the data disappear, and then go back to branch-a.

Hey Luke, one quick question. Some systems,
I don't know if Redis is one of them, require some kind of quiescing before you can take their data, because they keep aggressive caches or in-memory state. Do you do anything to help them put their state on disk, or flush their state to disk, before you capture it?

We don't have a quiesce API at the moment, but we'll build one as soon as we need it. We haven't yet found an application that a user or customer wants to use that actually needs it. We're seeing a lot of usage of things like MySQL InnoDB or Postgres, where they have write-ahead logs, and the only thing we need from the application or the database is that it's crash-consistent.

Related to that, then: some applications, MySQL being one of them actually, can store multiple volumes, and the crash consistency is actually point-in-time consistency across those volumes; say, a binlog or a redo log and the data could be separated. Do you actually have a crash-consistency story across volumes?

So we have a feature called subdots, and what subdots allow you to do is store more than one volume inside a volume, as it were: more than one dot inside a dot. And the snapshots, or the commits to use the git language, are consistent across all the subdots in a dot.

And from a container perspective, that works if the container, say, mounts multiple volumes? / Yes. / Okay, cool. / Yeah, or even if you have multiple containers, belonging to multiple microservices, mounting multiple subdots. / So you have a point of consistency across the volumes of a given container? / Yes, and potentially across multiple containers as well. / Okay, very cool.

Cool. So the next part of the first demo is just to demonstrate push and pull. I've got a local environment here; I'll go out into my moby-counter. I've got Docker running on my Mac.
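The commit-and-branch workflow from the demo so far looks roughly like this; the command names follow the dm CLI as used on screen, but the exact flags are approximate, so treat them as a sketch rather than exact syntax:

```shell
# See which dots exist, their branches, sizes, and dirty data
dm list

# Select the dot the demo app is using
dm switch moby-counter

# Commit the current (empty) state, then view the history
dm commit -m "empty state"
dm log

# Create a new branch; subsequent commits land there
dm checkout -b branch-a

# ...click around in the app, writing state into Redis...
dm commit -m "hello cncf"

# Flip the running container's data between branches; Dotmesh stops the
# container, switches the filesystem underneath it, and restarts it
dm checkout master    # the icons disappear
dm checkout branch-a  # the state comes back
```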
And I've got Dotmesh installed, and Dotmesh is completely fresh. Oops, that's the wrong cluster: dm remote switch local. You can kind of see what I'm going to do in my next demo. So, dm list here: okay, so this is my "local" remote, which sounds kind of funny, but you can do dm remote -v and see the different remotes that are available to the Dotmesh client. It's kind of like pointing kubectl at different clusters, right? In this case I'm pointing Dotmesh at the local remote, which is just the Dotmesh server that's running on my laptop. So what I can now do is go to our SaaS service, which is this thing called the Dothub. I don't have any dots in my account, but what I can do is export my hub username, just as a convenience, and I'm going to go and get my API key, which I get from the settings page, and copy it without leaking it. Then I can add the hub as a remote to Dotmesh inside this ephemeral demo environment we have here, and I can see that I've now got hub as a remote. So I can now switch to that dot, called moby-counter, and I can push a specific branch of it up to the Dothub. That was a small number of kilobytes, and if I go to the Dothub now, I can see that it's arrived. You can kind of see that if Dotmesh is like git, then Dothub is like GitHub; that's the general idea we're going for here. So you can see that it's arrived, and that there's a branch-a, and hopefully (oh, my internet connection is just being slow), on branch-a you can see the commit:
"hello cncf", which I pushed from the command line there. The bonus section here is that I'm going to pull this branch-a down onto my local machine. I don't need to install Dotmesh, because I already have it, and I also already have the moby-counter repo. But what I will do is, let me just check that dm list is definitely empty; so the first thing I'll do is clone the moby-counter repo of my username. Yep, so I'm going to clone that down from the Dothub. One way in which this varies slightly from git is that it only pulls down the master branch. There we go; it's being slow because my 4G connection is being slow. Then I can do dm list, and that's pulled down moby-counter on the master branch; you can also see it's pulled down that one commit on the master branch. I can switch to make that the active dot, and I can now start up the docker-compose app, and this might take a minute because I've wiped everything out. The idea here is that this is going to pull down the docker-compose application, bring up the Redis instance, and then start up exactly the same environment that I had in the demo environment, locally on my machine. So, just waiting a second here. While we wait for that (I know, I didn't bring it up yet), any questions at this point while we wait for docker pull? The best thing to do in all demos is watch people waiting for docker pull.

Luke, can you talk about which parts of this are open source and which parts are closed source? / Yes: everything is open source apart from the web interface. / That was a nice, simple answer.

Go ahead. / What's the economic model for Dotmesh? Is it subscription-based, or open core, or how do you guys think about that? / Yeah. For the Dothub, we're thinking about that:
we're thinking about the Dothub as a SaaS product, where you will pay money to store data on the Dothub. We hope to add more value to the data in the Dothub so that we can justify charging above cloud storage fees, because we don't want to be in that game, and we will be adding more features to the Dothub as well. We think of Dotmesh as an open source primitive, and it's really important that Dotmesh is a good open source primitive: it has to be complete, it has to be production-ready. I don't want to end up in the position where we offer a sort of crippled version of the functionality under a free license, but you have to pay us to use something that you can actually use for real. The idea is that Dotmesh is a complete open source primitive, and the value that we add in the Dothub is going to be built on top of that open source primitive, using our own APIs to implement the features on top of it. Does that answer the question? / Yep, thank you.

Perfect timing, because we now finally have our local moby-counter here, and it's on the master branch locally, which means we're not going to see the state here. The next thing we need to do is pull down that branch-a state that we pushed up to the hub, and again that will probably take a few seconds because my 4G is being slow. There we go. Then we can do dm branch, see that we've got a branch available, and check out branch-a. And then over here, bingo: we saw that our data moved from the online demo environment to my local development environment. So yeah, that's the first demo. If we've got time for another one, I can attempt a slightly more challenging one. / Yeah, I think you're good, go ahead. / Cool.
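Putting the hub round trip from this first demo together, the two machines ran something like the following; the remote-add syntax and argument order are my approximation of the dm CLI shown in the demo, and the username is a placeholder, so verify against the Dotmesh docs:

```shell
# --- Machine 1: push a branch to the hub ---
export DOTHUB_USERNAME=alice              # hypothetical hub username
dm remote add hub $DOTHUB_USERNAME        # dm prompts for the API key
dm remote -v                              # list remotes, like 'git remote -v'
dm switch moby-counter
dm push hub moby-counter branch-a         # push one branch of the dot

# --- Machine 2: clone and pull it back down ---
dm clone hub moby-counter                 # unlike git, clones only master
dm list                                   # moby-counter, master, one commit
dm switch moby-counter                    # make it the active dot
docker-compose up -d                      # start the app against that data
dm pull hub moby-counter branch-a         # fetch the other branch
dm checkout branch-a                      # the app's state flips over
```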
So, yes, I've got this other example. So far I've shown local development with docker-compose; that's all fine, but it's much more interesting to talk about production use cases with Kubernetes as well. We've done the Kubernetes integration: we've got a dynamic provisioner and a FlexVolume driver, and we'll be implementing CSI. If you go to our setup guide, there are instructions for GKE, instructions for AKS on Azure, and also instructions for generic Kubernetes, so feel free to try it out and kick the tires. You can install this on a cluster, and when it's running in clustered mode it gives you all the same features that you get when it's running on a single machine. So what I can do here is: I've got two different contexts in kubectl. One of them is this GKE cluster in Europe; kubectl get nodes... I guess this is where we learn how many round trips this takes. What I might do is try turning off my video. Okay, yeah, that helped, my latency is down. So, I've got a cluster in Europe, and I've got another cluster in the US. These are both GKE clusters because that was easy, but there's nothing about this demo that's specific to GKE; this would work from on-prem to cloud, or from one cloud provider to another, and so on. So yeah, I've also got my nodes in the US, and I can then demonstrate migrating a reasonably substantial MySQL database from one continent to another. We need to switch back to Europe, and then I'm going to apply some manifests. There's a load of manifests: MySQL manifests, and one of my least favorite pieces of software ever, phpMyAdmin, which we packaged up in Kubernetes. We can now switch the dm remote to that GKE cluster, and we can watch that MySQL dot fill up.
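The two-cluster setup here can be sketched as follows; the context and remote names (gke-europe, gke-us) and the manifests path are illustrative, not the ones used on screen:

```shell
# Two kubectl contexts, one per GKE cluster
kubectl config get-contexts
kubectl --context gke-europe get nodes
kubectl --context gke-us get nodes

# Apply the demo manifests (MySQL + phpMyAdmin) to the Europe cluster
kubectl --context gke-europe apply -f manifests/

# Point the dm client at the Europe cluster's Dotmesh and watch the
# MySQL dot accumulate dirty data as the database initializes
dm remote switch gke-europe
dm list
```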
We were a bit too slow; if you're fast enough, you get to see it going from zero dirty data to 115 megs, and that's just MySQL saying "here are my stock MySQL data files". I can then commit my empty state, and I can do dm list; dm list shows me my mysql dot. Just very quickly, I'll show you how that hooks up into the Kubernetes universe: it's the mysql PVC, and that refers, via annotations, to a Dotmesh namespace and a Dotmesh name, and that name, the mysql dot, is what came out here in dm list. I can now run this loader pod, and the loader pod just ingests a couple of hundred megs of data that we bundled into the container image, to get us started here; it's just some fairly boring sample data about employees and departments and so on. So what you can see here is that we've deployed a MySQL instance onto Kubernetes with Dotmesh installed, and now if you open the web interface, you can see that we've got some employees in the employees database: for example, Georgi Facello. These are completely made-up people, apparently, so any similarity to real people is purely coincidental. We can now see with dm list that we've got the full size of the dot, but we've also got one of the commits, and the dirty data on top of that initial commit is 280 megs. So I can now commit my bulk data set, and I can push it from Europe to the US. The good news is that this is not going via my laptop, because I'm pretty sure my 4G connection isn't going to sustain 20 megabytes a second; this is being orchestrated by the dm client on my machine, coordinating the data replication between these two GKE clusters. So I think the data is going over Google's private network between Europe and the US, which is nice. I can now switch over to GKE US.
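The PVC-to-dot wiring mentioned a moment ago looks something like the manifest below; the annotation keys, storage class, and names are my best recollection of the Dotmesh docs and should be treated as assumptions to verify against your version:

```shell
# A PVC that tells the Dotmesh dynamic provisioner which dot backs it.
# Annotation keys and storage class name are assumed, not confirmed.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  annotations:
    dotmeshNamespace: admin   # assumed annotation key
    dotmeshName: mysql        # the dot name that shows up in 'dm list'
spec:
  storageClassName: dotmesh   # assumed storage class installed with Dotmesh
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
```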
I can see that my data has arrived. I'm not going to switch back to Europe yet; I'm going to simulate the fact that I did a bulk transfer of a big database and that it took some time, maybe in reality several hours, because it can't actually break the laws of physics. I'll also simulate the fact that while that snapshot, that commit, was being replicated over the Atlantic, more data was still being written to the live database in Europe. So I'm going to simulate that with a little script that adds some more data; it just loads this SQL dump called employees-extra. I'm then going to switch over to Europe, and at this point it's scheduled downtime; we're going to try to make the scheduled downtime as short as possible. So, I just deleted the MySQL pod in Europe, so MySQL is now down. We can commit our secondary data set, we can push whatever the delta is from the first data set to the next data set, and then we can switch over to our US cluster. We can deploy our manifests, and with any luck we can see that things are starting up. At this point it's probably just pulling some container images. Okay, and it's all running. Now I can open my other IP address, and with any luck I can see that my database is back up, and now it's in the US. So here we can go and look at the employees; we added some of the names of the people on our team, just for fun. And that indicates the bulk data that was transferred initially, plus the delta of data that was captured up until the few seconds of scheduled downtime. So that's just another use case for Dotmesh.
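The whole migration sequence just demonstrated boils down to something like this; the remote names, pod name, and flags are approximate rather than the exact commands run on screen:

```shell
# Phase 1: bulk transfer while the app stays up in Europe
dm remote switch gke-europe
dm switch mysql
dm commit -m "bulk data set"
dm push gke-us mysql          # replicated cluster-to-cluster, not via the laptop

# Phase 2: brief scheduled downtime to catch up the delta
kubectl --context gke-europe delete pod mysql-0   # stop writes (pod name assumed)
dm commit -m "secondary data set"
dm push gke-us mysql          # only the delta since the last push moves

# Phase 3: bring the app up in the US against the migrated dot
dm remote switch gke-us
kubectl --context gke-us apply -f manifests/
```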
It can be used for moving data around between production systems as well as in development. And that's it, really. I've got one slide here which summarizes it: Dotmesh is an open-source primitive for people using Docker and Kubernetes in development and production, which provides Docker volumes and Kubernetes PVs that can be committed, branched, pushed, and pulled like git repos — but they can be terabytes in size, and they can be automatically snapshotted. So that's the end of my presentation. Thank you very much.

Hey Luke, sorry, one more question: for data that does not live in volumes — like, you know, S3 buckets or time-series databases or other stuff — does Dotmesh do anything for those, or are you relying only on stuff that lives in volumes?

So, we are interested in supporting ETL into Dotmesh dots from things that would typically not run inside Kubernetes. Actually, this comes up on a backup slide I've got here, a sort of roadmap slide. We're in the process of defining the mesh — I should actually move this arrow, because we're currently working on number two: adding production volumes into the mesh so that we integrate with reliable disks, like I was talking about earlier. Number three on our roadmap is to bring production databases, from things like RDS, into the mesh, because one of the things we learned from talking to lots of customers is that lots and lots of people are just using the databases provided by their cloud provider. But I think it still makes sense to try to bring those into the fold by being able to import from them, and to bring that data into the earlier stages of the software development lifecycle that might be fully cloud native.

So that would be an example where you're not on the data path, with RDS? — Correct, yes. — Okay, cool. Any other questions?

Yeah, one question.
I noticed none of your Dotmesh commands refer to a specific PVC or pod. Does it take a snapshot of the whole namespace, or how do you specify it? Your Dotmesh commands look just the same as they do in Docker — how do you know the mappings between applications and their storage? How does that work?

Yeah, that's a good question. In order to make the commands short to type, there's a concept of a current remote, a current dot, and a current branch. If you do a dm list, you'll see an asterisk next to the current dot, for example, and that allows you to keep the commands as short as possible. We've got a ticket open for adding explicit flags, so you could type dm --remote=gke-us --dot=mysql-dot --branch=master commit. That will be useful when you're scripting things, and when you're not interacting with things as a human. So yeah, it's just client-side state that's used to make the typing easier, and also to make it feel familiar with respect to git, which has the same concept.

I guess the part that's not clear to me is whether Dotmesh volumes are a new type of PV in Kubernetes, or whether they're a way to populate other types of PVs in Kubernetes, Luke?

So, think of Dotmesh as a separate system that runs alongside Kubernetes on your cluster, and that can be deployed to Kubernetes using Kubernetes — you can kubectl apply to install Dotmesh. It sits alongside Kubernetes with its own sort of registry of dots with their names, which can also be exposed directly to Docker via the Docker volume plugin interface. So when you refer to a dot name,
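The implicit-versus-explicit distinction described above can be sketched like this. The short form is what the demo used; the long form is the one from the open ticket just mentioned, so its exact flag syntax is an assumption and may not match what eventually ships.

```shell
# Short form: relies on client-side "current" state, analogous to
# git's notion of a current branch.
dm switch mysql-dot          # make this the current dot (the asterisk in `dm list`)
dm commit -m "checkpoint"    # commits against the current remote/dot/branch

# Hypothetical explicit form for scripting, per the open ticket:
# every piece of context is pinned down on the command line.
dm --remote=gke-us --dot=mysql-dot --branch=master commit -m "checkpoint"
```

The trade-off is the usual one: implicit state keeps interactive typing short and git-familiar, while explicit flags make scripts deterministic regardless of what state the client happens to be in.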
that's a Dotmesh idea, but it can be mapped to from a PVC, if that makes sense. And that's why you have the same experience whether you're using the Docker integration or the Kubernetes integration.

Yeah, it's still not clear to me how the mappings happen — how Dotmesh knows exactly which PVC it's going to snapshot or populate. That part's not clear.

The PVC YAML refers to a Dotmesh name and namespace, which uniquely identifies the Dotmesh dot and branch.

So the Dotmesh client would parse those YAMLs to see exactly what it should capture?

The client doesn't need to parse the YAMLs, because the Kubernetes YAMLs pin down a specific Dotmesh dot, and the dm client can also pin down a specific Dotmesh dot just by name. It just relies on the names matching up.

Oh, yeah. Thank you.

You have to use Dotmesh volumes here, right? You're using the CSI or the Docker volume plugin for all this to work?

Yes — FlexVolume at the moment, but we'll implement CSI soon. The important point, though, is that Dotmesh on Kubernetes is going to be something which both consumes PVs and provides them, because, as I was saying earlier, we're not implementing synchronously replicated storage for reliability ourselves; we're consuming systems that provide those guarantees,
and exposing upwards these sort of portable, snapshottable volumes, or dots.

Luke, is the model that you're actually passing through to an underlying volume? Or is the model that a given container uses the Dotmesh volume plugin and goes to the Dotmesh server, and that itself is using other PVs for its storage?

It's more the latter, because we can also provide multiple dots from one underlying PV, so you can get better density than if you're using EBS as a backend, for example.

So what can you say about performance — if you had thousands of containers, what are the implications of everything centralizing on that Dotmesh server?

Well, you can have multiple Dotmesh servers — that's why they're called clusters. You can shard your dots across your Dotmesh servers. I don't know if this is a good analogy, but it's kind of like a cloud SAN, in the sense that you can have multiple backends: yes, access goes through Dotmesh, but there are multiple Dotmesh backends available, and they can be scaled across multiple PVs that interact with the backend, if that makes sense.

Have you considered a model where it's a pass-through to another PV at the container level? So you're essentially just a filter on top of some other underlying PV.

Well, we're sort of aggregating underlying PVs and using the fact that they're reliable to implement failover, while providing additional benefits and features on top.

Yeah, I guess what I was asking is whether you've considered a model where you're not aggregating, but you're still providing features on top.

Essentially — yeah, that could make sense. We could have a mode, a config flag, that says: map Dotmesh dots one-to-one onto underlying PVs, please. That might be beneficial or desirable for certain use cases, I expect. Cool.
All great questions. Luke, what's the best way to get hold of you? It sounds like there are still lots of questions to follow up on.

Yeah, please do come and join our Slack. If you go to dotmesh.com and scroll all the way to the bottom of the page, there's a really tiny link — it's hidden under Community in the footer — a direct link to the Slack invite on dotmesh.com. So yeah, please come and join our Slack. If you want to reach me personally, I'm luke at dotmesh dot com.

Excellent. Thanks a lot, man — I really appreciate you doing the presentation. That was excellent.

Yeah, no worries. Thank you for letting me take almost the whole time in your meeting. I appreciate it.

Nice work, guys. — Yes, thank you.

All right, so we've got five minutes left, and just a couple of administrative things. One: at KubeCon EU we had three sessions, and we've given up the third one — that was the late-night session that didn't make a lot of sense. So we now have two for the SWG: one is the intro, one is the advanced session. Regarding the intro, that's been moved so it no longer conflicts with the Kubernetes storage SIG, which is a great thing. And regarding the advanced session, we've invited members of the TOC to come talk to us about the charter. I think some people in the group have been reached out to by Camille; she's been asking what's going on with the SIG and what you want it to do, etc. So if you have an invite from her, please do take the meeting and let her know what you think, because they're going to take some of that feedback and, hopefully, by KubeCon EU we can make things clearer in terms of the charter and what the TOC is looking for. Then, regarding the actual session planning, I think we'll probably talk about that next time and, you know, from the people who volunteered,
we'll see who can start working on it for the event.

Anything else, then? Do you have any other comments, anyone?

Perfect. Thanks, Clint. — Thanks, Luke. — Excellent, you're welcome.

All right, guys, so we'll give everybody back four minutes of their day. Thanks a lot. — Yes, thanks everyone. Bye.