Well, hello and good morning, at least here in the European time zone. Today I have a super interesting guest with me, and that is Luke. Hi Luke, good morning, how are you doing? Hey Michael, how's it going? Thank you for having me on. Hey, that's super, I'm super excited. We're going to really talk about your new thing there, dotmesh. But before we get to that, can you give the audience a little bit of background? So this is your new venture; there are many other things I can remember that might be of interest. What have you done before? Yeah, so I guess I'll start by talking a little bit about ClusterHQ. ClusterHQ is the company I founded previously, and we were working on making stateful containers a reality for people. So basically making it possible to run databases and other workloads that include data in a microservices, containerized environment. I was involved in developing Flocker, which was the open source project at ClusterHQ. With Flocker we did a lot of work. I mean, the industry was super early at that point; that was the year that Docker just sort of started exploding. So we're talking 2013, 14? 14, I think, was when we really got involved. We pivoted something we were doing previously; we actually had a ZFS-based distributed web hosting platform on FreeBSD that we pivoted to start serving the Docker market. But the interesting thing about Flocker was that it was so early that we had to do a lot of hard work just to connect containers to storage at all. And so we integrated with EBS on Amazon and Google Persistent Disk. We integrated with Cinder on OpenStack. We integrated with about a dozen different storage vendors for their SAN products. And we managed to get Flocker to 1.0 and into production with a big customer, Swisscom. So that was great. Apart from the fact that then Kubernetes came along and did the same thing.
And so we then found ourselves in the position where we were being commoditized by Kubernetes, and we were this thin layer between container orchestration frameworks and the storage underneath. So we realized that we had to pivot again. Unfortunately, at that point we had already scaled too large, really, to be able to move quickly enough. And that's premature scaling: believing our own hype, and believing that we had achieved product-market fit before we really had. I think that's the reason why ClusterHQ wasn't successful, ultimately. But I then took a year out to work at Weaveworks, and I had a fantastic time working at Weave. Really great team there, really great products, so I recommend you go try Weave Cloud as well. I was working on developer experience at Weave, and that was super fun because I got involved in teaching and talking about everything from container networking to Prometheus monitoring to visualization to continuous delivery with Kubernetes, and I got to meet a lot of people and had a lot of fun doing that. But I guess I just still had the itch to come back to the container storage world. And so I then launched this project, dotmesh, which we just launched on Wednesday this week. And it's not actually really container storage anymore; it's more about data management for cloud native applications. So we'll get to that in a moment, because I'm really trying to understand it from a UX point of view, from a use-case point of view, what it's about. But you're going to tell us everything about it. Just on the last thing you said there: we know each other maybe from your SIG Cluster Lifecycle work, and you've been doing a lot of great work there. I hope you continue that. I will, yes. And yeah, so I think that now the time is probably right. Stuff has settled; Kubernetes has kind of won the container orchestration war. It is the container orchestrator.
And it's really now the time where you can actually build stuff on top of that, and also be successful in terms of business, applying a certain business model. At least that's how I see it. So without further ado, let's get into that. Cool. Okay. Awesome. And I completely agree with you. That was another one of the challenges that frustrated us: we didn't know which orchestrator was going to win, so we had to support Swarm and Kubernetes and Mesos. But absolutely, it is definitely easier now that things have settled. And I mean, we're even getting our stuff working on Kubernetes on Docker for Mac, and it's really nice to see that Docker have embraced running a local Kubernetes cluster for development on Docker for Mac in a nice way. So that's pretty interesting. So yeah, I'll talk a little bit about dotmesh. I've got a couple of slides, actually, that I can use to paint a picture. I've really just done it in the form of three memes. So let me just share my screen quickly. Hopefully, just a second. Can you see that? Yes, I can. Okay, cool. So yeah, I mean, we're both software engineers and we know what it looks like when it's a bad day at work. These are three stories that we learned from talking to dozens and dozens of users and potential customers late last year. It would be a really bad day at work if all three of these things happened. The first one is that you have a change to an application that you're developing; it passes all the tests in CI, you deploy it to production, and it blows up. This happens surprisingly often. It's happened to me; it's happened at companies I've worked with. And it's really painful, because it means that you're exposing your users and your customers to failures, to errors, and so on. So you have to ask the question: why does this happen?
And the reason that things pass the tests and then blow up in production is that production is just fundamentally different from all the other environments that you have. It has different data, it has different scale, and it has different inputs as well. Even just the requests that are hitting production are often going to be different and more varied than the inputs that you use when you're testing your software in CI. So that's the first problem, and it's a big problem to solve. We're not going to solve all of that problem in one go, but it's interesting to just set the scene. The second one, and I'm sure you know XKCD: I've actually modified this XKCD slightly. It used to say that the number one programmer excuse for legitimately slacking off is that your code is compiling. But actually, in 2018, compilers are quite fast now. The number one programmer excuse for legitimately slacking off, 2018 edition, is that the integration tests are running. Oftentimes I've seen problems at companies where you just can't seem to ship stuff. The dev team is just slow for some reason, stuff is late, everyone gets stressed, and it can be a bit of a mess. If what I'm saying sounds familiar, the cause is often a slow CI system with slow and flaky tests at the heart of the problem. So how can we make the CI systems faster? How can we make our tests faster and more reliable? Another interesting fact about this is that the more realistic your testing gets, the slower and flakier it tends to get. End-to-end tests that test maybe 50 different microservices together using real databases are pretty much guaranteed to be slower and flakier than unit tests that can run quickly using prepackaged data that's shipped as part of the test. What do you think about that?
I've heard that a couple of times, and when I said it myself I got quite a lot of heat. It was like: hey, in the context of containers and microservices and whatnot that we're dealing with in our space, you don't actually have a QA environment anymore, you don't really do tests, you go directly to production. Beyond some smoke tests, you just expose the new version to a very small part of the audience. And then, if you're experiencing trouble there, well, you don't go and roll it out for everyone, you just roll it back for those 0.1%. What do you think about that? Yeah, so that approach is fantastic if you're at a sufficiently large scale that you can get statistically significant data about whether the new version works by rolling it out to a tiny percentage. And I'm not saying that people shouldn't do that. If you've got sufficient scale to do that sort of canary or blue-green testing, then absolutely go for it. There's an interesting case there, though: if the change that you're deploying depends on data, if, for example, it updates a schema in a database, then you can't fork your database into the old version and the new version. The canary approach to testing new code changes only really works for stateless applications. So there's an aspect there of needing something a little bit stronger for stateful applications. In a nutshell: if you don't happen to be Facebook or Google or Twitter or whatever, keep on listening, and otherwise keep on doing what Luke is suggesting. And I'd like to talk to people at Facebook and Twitter and Google as well about what they do. So anyway, I'll proceed with the third category that we have here. This one is that one does not simply capture the state of four microservices at once.
And what I mean here is that as we see a progression towards microservices, there's this thing called polyglot persistence, which is the idea that the right way to do stateful microservices is that each database is only talked to by one microservice. The upshot of this is that instead of having one large database at the center of your application, you'll end up with many. You'll end up with a database for your orders service, a database for your users service, a database for whatever other domain-specific data there is for whatever your application does. And what this means is that when you're testing an application, when you're doing development on an application, you might be spinning up maybe four or five different databases on your laptop, if you can even spin up all your microservices on one laptop. That means there's lots of state in lots of different places. And it's just so hard to capture all of that state in one go, in order to, for example, share a problem state that you've hit in development with a colleague to help you debug it, that people just don't do it. It's so hard that people just don't bother. You'd have to exec into all your containers, dump their state, zip up those states, and then email them to your colleague or something. It would just take so long; it's not worth it. Just one quick note: not everyone might be familiar with this. People probably know polyglot programming, but might not be that familiar with polyglot persistence. Just to be clear, it's not a top-down thing, right? It's not that, oh, we have to use five different kinds of data storage, so here we use some SQL, here's some Elasticsearch, and here's some Cassandra. It's really that, depending on your workload, for a shopping basket you might only need some Redis or whatever, and for some transactional stuff you might need SQL.
And that's bottom-up. That's why you end up with different kinds of data stores, really, that treat data differently. I just want to make sure that everyone is on the same page that this is really bottom-up, same as polyglot programming. It's not that, oh, you have to use five different programming languages. No, no, no, it essentially grows bottom-up. Yeah, absolutely. I think polyglot persistence is an effect, sort of an emergent behavior, that you see when you do microservices, rather than the CIO saying we must have five different databases. It's a consequence of doing microservices properly. And absolutely, it's not like, oh, we have to use five different microservices or five different databases. It's because each team is given autonomy over developing the microservice they're working on in the language that's most appropriate for it, and also with the data store that's most appropriate for it. So if you're working on the search service, it probably makes sense to use Elasticsearch as your data store. If you're working on data that tends to be ephemeral, then Redis might be a good choice. Whereas if it's something like a users service, where the transactional aspect is more important, then a traditional SQL database like Postgres or MySQL would probably make more sense. Cool. So anyway, you end up with this problem where it's so hard to capture the state of multiple microservices at once in development that people just don't do it. And so what you see instead is: oh, can you SSH into my machine to look at this problem? Or, even if you're in the same building: can you come over here and look at this problem over my shoulder and help me fix it? And of course, that doesn't really work very well if you're in a team.
For one thing, it means that you have to interrupt people and you have to synchronize human focus around these environments where the data is. But it also goes badly when you're in different time zones or when there are lots of different teams. So I think there's a better way of dealing with this problem, basically. If you take a step back, you can see that the problems I've described touch all the different stages of the software development lifecycle. In production, an unexpected outage can often happen because tests aren't realistic enough with respect to data. In CI, you often get these end-to-end tests that manipulate real databases, and they're slow and flaky. And in development, microservices and polyglot persistence make capturing and sharing development states hard enough that no one does it. So these are the things that we found when we started talking to people about microservices and data. And if you take a step back and ask: well, what's the common theme between all of those problems I just described? In all of those cases, they happen because you weren't in control of data. And so the interesting question that follows is: well, what are you in control of, and what does control mean in modern software development? Modern software is all about control. So firstly, what is modern software made out of? I think that modern software being made out of code, infrastructure, and data is a reasonable way of dividing up the world. And we've been in control of code for the longest time; for probably two decades we've had version control. And more recently, we've seen the emergence of continuous integration, meaning that code is controlled by the fact that it is continuously tested.
More recently than that, we've seen the development of control over your infrastructure: in particular, the movement towards immutable infrastructure with things like Docker and Kubernetes, but also declarative config being applied to cloud resources with things like Terraform, and to the state of your servers with things like Ansible. It has all converged on us actually having quite good control over our infrastructure now, and being able to recover from machines failing by having something like Kubernetes automatically spin up new pods on different machines based on a declarative config. And so this is really powerful. But we're still in a state where data is outside of the circle of control. The way that we've learned people deal with data is very often scripts that they've written, or manual processes. A surprising number of companies, or maybe it's not surprising, but a huge number of companies still have DBAs, and you send the DBA an email or make a phone call if you want a snapshot of your production data, and so on. And so I feel there's broadly a space for data to be brought into the circle of control, and that's our mission with dotmesh; that's what we're trying to do. So the obvious next question is: how do you bring data into the circle of control? And what I'd like to propose is that you do it with a mesh. The mesh looks something like this. We're proposing that you include this service called the dothub in the center of your mesh. Then around the edges of the mesh we've got these different environments. We've got a development environment of one developer, and another development environment of another developer. The first development environment might be a laptop; the second might be a VM. Then we've got the CI system, of course, which is running tests against code as it flows from dev into, ultimately, staging and prod.
Then you have staging, and like you say, staging is maybe going away, or maybe there are more advanced versions of staging happening, like a Kubernetes namespace per branch, for example. But in all of those cases there are often still environments in between the CI system and production. And then of course we have production itself, where your workload is running and serving production traffic. Once you have a mesh, you can do some interesting things. The first use case that we're proposing with dotmesh is developer collaboration. It's this idea that if developer one has a problem state or some interesting state, maybe they found a security vulnerability in an app and you can only reproduce it by getting the databases into a certain special interesting state, they can capture that state in a dot. We call them data dots. The developer can then commit that dot, just like a Git repo, and push that interesting state up to the dothub. Another developer can then come and pull down that state, debug the problem, and develop some code changes that address it. There's also an interesting use case for capturing failed CI runs. We've talked to customers who have a CI system that involves a lot of microservices being pulled together in one pipeline and tested end to end. And whenever that pipeline goes red, the really interesting thing is that they have to stop the entire office committing changes, because they have to SSH into the test runners and go and poke around with the databases to find out what went wrong. So wouldn't it be so much better if, instead of having to SSH into the test runners, you could just capture the state of a failed CI run in a way that is reproducible both in terms of code and data, and pull that state down to a developer who might want to reproduce it at a later time, or in a different time zone, and certainly in a different environment? So that's the second use case.
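The git-like commit/push/pull workflow for data dots can be made concrete with a small toy model. To be clear, this is not dotmesh's implementation: plain Python dicts stand in for filesystem snapshots, and a dict stands in for the dothub, purely to illustrate the semantics described above.

```python
import copy

class Dot:
    """A toy 'data dot': a named, versioned capture of application state.

    NOT dotmesh's implementation -- just a sketch of the commit/push/pull
    semantics, with plain dicts standing in for filesystem snapshots.
    """
    def __init__(self, name):
        self.name = name
        self.working_state = {}   # live, mutable state
        self.commits = []         # immutable point-in-time snapshots

    def commit(self, message):
        # Capture an immutable copy of the current working state.
        self.commits.append({
            "message": message,
            "state": copy.deepcopy(self.working_state),
        })

    def push(self, hub):
        # Publish all commits to a shared hub (here just a dict keyed by name).
        hub[self.name] = list(self.commits)

    @classmethod
    def clone(cls, hub, name):
        # Another developer pulls the dot and restores its latest commit.
        dot = cls(name)
        dot.commits = list(hub[name])
        dot.working_state = copy.deepcopy(dot.commits[-1]["state"])
        return dot

# Developer 1 captures an interesting state and shares it.
hub = {}
d1 = Dot("app-state")
d1.working_state["users_db"] = {"alice": {"avatar": "evil.png"}}
d1.commit("repro: unprivileged user overwrote the default avatar")
d1.push(hub)

# Developer 2 pulls down the exact same state to debug it.
d2 = Dot.clone(hub, "app-state")
print(d2.working_state["users_db"]["alice"]["avatar"])  # → evil.png
```

The point of the sketch is only that a commit is immutable and travels between environments as a unit, so developer 2 starts from exactly the state developer 1 captured.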
And then the third use case is pulling realistic data from production. This is going more into the territory of things that we are going to be able to do in the future. I wouldn't recommend anyone use dotmesh 0.1 to try and do this yet, but it's the direction we're going in. It's the ability to take production data, capture it in a data dot, pull that down maybe every hour or every night, scrub that data, of course, because you want to remove personally identifiable information from it, and then pull it down into a staging environment or a CI environment to run tests against, or even into development. So what you can see here is that everything we're doing is about moving these data dots around between different stages of the software development lifecycle, and that it unlocks a new set of DevOps workflows. That's super exciting. Myself, I have a background in data engineering, so I really appreciate and understand all these issues. Let me quickly try to reformulate it in my own words to see if I really got it, because I honestly didn't get it at first. I played around with it, I looked at the blog and so on, but I did not really get it. Is it fair to say that dotmesh is kind of like the Istio for data? So essentially, rather than having ad hoc solutions that say: do this in CI, capture that with a shell script or whatever, put it there on S3, or even making it part of the application, you outsource that, and the mesh takes care of: here, take a snapshot; here, move it from that environment to that environment. In the same way that Istio essentially says: don't do that in the application, we do it in the data plane of the service mesh, and these two services can communicate, or you inject some failure or whatever. In the same way, dotmesh does that for your data. Is that kind of fair? Yes, I think that's a very good way of describing it. I might use that. Thank you.
I think the really important aspect of what you just said is that it's a generic solution. The word generic might not be the right word, but it's a generalized solution for dealing with data snapshots across all the different stages of the software development lifecycle, independent of which kind of databases you're using, independent of which infrastructure you're running on, whether it's cloud or on-prem or laptops or whatever. It's about providing a set of tools that work in a consistent way, in a generalized way, across all of those different environments, to give developers the power to get the data that they need, in the place that they need it, when they need it. That raises a very interesting and technically challenging question. I don't doubt that you and your team are able to tackle it, but still. There are many different data stores and databases on the market: all the relational databases, any kind of NoSQL, NewSQL, and whatever-SQL. Does that mean that for, let's say, five different microservices using Elasticsearch, Cassandra, Postgres, MySQL, and Redis... Yeah. You only have five? ...for each of those data stores, and I'm using data store as an umbrella term for any kind of database and whatever, you essentially have a kind of plug-in driver that understands the snapshotting operations of that particular data store, and you then provide a generalized, generic interface that just says: snapshot that, move that here, and so on? So that would be one way to do it. It's not the way that we've started out doing it, though.
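The per-store plugin design Michael is asking about could look roughly like the following. This is a purely hypothetical interface sketched for illustration; it is not dotmesh's actual architecture (dotmesh snapshots at the filesystem layer instead, as the next answer explains), and the Redis "plugin" here just fakes a store with an in-memory dict rather than calling Redis's real BGSAVE/RDB tooling.

```python
from abc import ABC, abstractmethod
import ast

class DataStorePlugin(ABC):
    """Hypothetical 'one driver per data store' interface.

    Each plugin knows the store-specific snapshot/restore mechanics,
    while callers only ever see this generic interface.
    """
    @abstractmethod
    def snapshot(self) -> bytes: ...
    @abstractmethod
    def restore(self, blob: bytes) -> None: ...

class RedisPlugin(DataStorePlugin):
    # A real plugin would drive Redis's native persistence tooling;
    # an in-memory dict stands in for the store here.
    def __init__(self):
        self.data = {}

    def snapshot(self) -> bytes:
        return repr(self.data).encode()

    def restore(self, blob: bytes) -> None:
        self.data = ast.literal_eval(blob.decode())

# Generic interface, store-specific mechanics underneath.
r = RedisPlugin()
r.data["basket:alice"] = ["logo1", "logo2"]
blob = r.snapshot()

r2 = RedisPlugin()
r2.restore(blob)
print(r2.data["basket:alice"])  # → ['logo1', 'logo2']
```

This is essentially the approach Kanister takes, as discussed later in the conversation; the trade-off against filesystem-level snapshotting is needing one driver per database versus one mechanism for all of them.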
So the way that we have started doing it is instead to provide a layer that sits underneath each database, because every database that you just mentioned writes files to a file system. We provide a snapshotting engine that sits underneath them and allows you to take consistent, atomic snapshots of the file system state of those data stores. Now, that does rely on the data stores being crash-consistent. It means that, for example, we rely on the fact that Postgres has a write-ahead log, and we rely on the fact that MySQL's InnoDB can recover from a power outage. But all of these data stores can recover from having the power ripped out of the machine they're running on, so we can recover these atomic snapshots in different environments, and we can take the snapshots without stopping the database. That is awesome, and I think needed, otherwise certain use cases might not be possible. The only thing I'm having trouble understanding is: if you're working on the file system layer, there are some databases that actually bypass the file system; they want to deal with raw blocks directly. How does that work in this case? So, I haven't seen a database that does that in quite a long time. We would have an answer for it: we could provide a snapshottable partition rather than a snapshottable POSIX file system, but we haven't seen that it's needed yet. I think probably 80% plus of the databases that we see, or that exist in the world, probably more than 80%, just write files to disk on a file system. It's definitely an interesting question, though. Another interesting aspect, and here I'm playing devil's advocate with myself, is: how do you then deal with sharded databases? How do you actually deal with it if the data is sharded across different nodes?
Well, I'll be very honest with you: at the moment we don't. At the moment we're focusing on these dev and CI use cases, so we're assuming that you can run your stack on a single machine, basically. But at the same time, dotmesh does already support running in clustered mode. It just has the restriction that each data dot is only on one machine at a time, but you can have many of them spread across many machines, if that makes sense. There are definitely some interesting questions around how to capture the state of a sharded or distributed database, and I'll just say that's a research topic we're investigating at the moment, and we hope to have a solution for it in due course. That's brilliant. I mean, I personally prefer this kind of upfront honesty, saying: look, this is what you can do, and here are the limitations, over: we're going to solve all the problems of the world, which then boils down to: well, actually, by the way, in your case it doesn't work. I'll have to research and follow up regarding raw devices. I thought, at least in my experience many years ago, that Oracle would be one of those that writes directly to raw devices. But you're probably spot on with most of the others, like Elasticsearch and whatnot: at the end of the day they just have some local file system, ext4 or whatever, doing the heavy lifting, and they just work on top of that file system. Yeah. The sharded thing, that is something where you maybe don't even need the solution yourself; you might plug into the solutions of others, and might draw from the experience you had with Flocker.
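To make the filesystem-level idea from a moment ago concrete, here is a toy sketch that captures the on-disk state of several data stores into one snapshot directory. Note the simplifications: real dotmesh uses atomic ZFS copy-on-write snapshots, whereas plain directory copying as done here is only safe if the stores are quiesced or crash-consistent, so treat this purely as an illustration of why the approach is database-agnostic.

```python
import os
import shutil
import tempfile

def snapshot_stores(store_dirs, snapshot_root, tag):
    """Capture the on-disk state of several data stores in one snapshot.

    Toy stand-in for filesystem-level snapshotting: any database that
    persists by writing files can be captured this way, without any
    database-specific backup tooling.
    """
    dest = os.path.join(snapshot_root, tag)
    os.makedirs(dest)
    for store in store_dirs:
        shutil.copytree(store, os.path.join(dest, os.path.basename(store)))
    return dest

# Simulate two databases that persist state as files on disk.
work = tempfile.mkdtemp()
pg = os.path.join(work, "postgres"); os.makedirs(pg)
rd = os.path.join(work, "redis"); os.makedirs(rd)
with open(os.path.join(pg, "users.dat"), "w") as f:
    f.write("alice")
with open(os.path.join(rd, "clicks.dat"), "w") as f:
    f.write("42")

snap = snapshot_stores([pg, rd], tempfile.mkdtemp(), "failed-ci-run")

# Both stores' states now live together under one snapshot directory.
print(sorted(os.listdir(snap)))  # → ['postgres', 'redis']
```

The crash-consistency point from the conversation maps directly onto this: restoring such a snapshot looks to the database exactly like recovering after a power cut, which is why a write-ahead log (Postgres) or a recoverable engine (InnoDB) is what makes the captured state usable.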
Yeah, I mean, there's actually a really interesting project called Kanister from a company called Kasten, and I was speaking to them yesterday. There's an interesting opportunity to potentially collaborate around that, because Kasten are taking the approach that you first described, which is that you integrate with each data store using the data store's native backup tools, basically. Which is interesting, because it can work with sharded databases: the sharded database knows how to back itself up. Right. So it's really cool that Kasten and dotmesh are exploring these two possible solutions to the space in parallel. And the other thing that's really cool is that Kanister, the project that Kasten have for actually integrating with these individual data stores, is open source. So it may well make sense that in the future we can leverage some of that and use that project. Right. And, shifting gears a little bit, there is this initiative, I think it's alpha now, in the wider container ecosystem, called CSI, the Container Storage Interface. Can you, for the audience, put what you offer here with dotmesh in relation to that? Is it the same, does it overlap, could one influence the other? Yeah. So my understanding of CSI, the Container Storage Interface, is that it's exactly that: it's an interface. It's an API that you implement if you're a storage provider. And that's really nice, because Kubernetes is not saying: we're going to create storage. It's saying: we're going to let everyone plug their storage into Kubernetes. And for us, with dotmesh, we already plug into Kubernetes. We have a FlexVolume driver, which is sort of the precursor to CSI. And we also have a dynamic provisioner, which is the new way that Kubernetes allows you to do lifecycle operations on volumes, like create and destroy.
And the FlexVolume driver is the thing that does attach and detach. I believe, and I need to look more into CSI, that CSI covers the FlexVolume side; it's like the new version of the FlexVolume driver. It's just a nicer interface than the FlexVolume API, I think, and we'll definitely support it when it's ready. The other interesting thing is that Kubernetes, specifically SIG Storage, is working on snapshots, and it would totally make sense to expose dotmesh commits as snapshots through the Kubernetes snapshot API once that exists. So we're going to be getting more involved in SIG Storage, and we're going to be helping to actually write code and offer our engineering efforts to move all of those efforts forward a little bit, and to make sure that we can provide implementations of those APIs as they develop. Totally makes sense. My hope actually goes a bit further: that the emerging spec or standard in that space is informed, influenced, and shaped by what you guys have put forward there, because to me that makes a lot of sense. I mean, there are not that many generic offerings; actually, yours is the only one I know of that does the snapshotting part in the context of containers. So really, it should be the other way around: dotmesh should be the reference implementation, and the spec should flow from there. But okay, we're getting ahead of ourselves. We'll see. I mean, I don't want to go charging in and telling everyone what to do; I'll go and listen and get involved in the open source effort and get it going that way. But yeah, absolutely. Cool. So, go ahead. I was going to say I've got a short demo that I can share. Please, yes. Cool. I tried it out, I think yesterday, and in terms of flow it's awesome; I just didn't get the underlying motivation. Absolutely cool. Where would I use that?
Yeah, so actually I'll just point out a couple of things before I go through the demo. You can see my screen now, right? Yes, sir. Cool. The website at dotmesh.com is now live, and there are install instructions on the homepage, because we put loads of effort into making it really easy to install, and that just sets up a local Docker environment with dotmesh. And then I also just wanted to point out the docs site, so I'll open this in a new tab. The docs site is linked from the homepage and has lots of interesting information about what dotmesh is and how to install it, including more detailed installation instructions for Kubernetes, for example. So there's a set of tutorials here; we launched with nine tutorials. They cover the hello world, and then using dotmesh as a library, collaborating with dotmesh, the hello world on Kubernetes, and then also capturing failed states in CI, which we've got working so far with Travis, Jenkins, and GitLab. The GitLab tutorial is especially cool, because the Travis and Jenkins ones integrate with Docker, but the GitLab one integrates with Kubernetes, so go and check that out. And then we've got a simple example of dotmesh with MySQL, and also a subdots tutorial. What I'll show you today is just the hello world, but I think the point at which dotmesh gets really interesting and powerful is when you start looking at the library and collaboration use cases, so I just want to show you a section of this tutorial. Basically, in this tutorial we've got a sample app that is split up into microservices, and it just so happens that there are four bugs in the app. Well, actually, there are three bugs and one case in which you'd want to capture some data to use later. But the most interesting bug is this security vulnerability: we've discovered, or a developer has discovered, how an unprivileged user can set the default image for all new users.
The app itself is just a silly little app that lets you click to add logos on the screen, but the version used in this tutorial also has a user service and allows users to sign up and log in. The security vulnerability depends on state that lives in three different databases at once: a Redis database for the clicks DB, a Postgres database for the users DB, and the files that users upload. And this comes back to the "one does not simply" point: now it actually is possible to capture the state of all three of those databases at the same time. It turns out this security vulnerability depends on the state being a certain way in all three of those databases, and what dotmesh allows you to do is capture that security bug. Then, over in the collaboration tutorial, we pretend that we don't know how we caused the bug, which is entirely plausible: a developer might come across the security vulnerability without actually knowing exactly what happened. And this comes down to the question: as software developers, what do we spend most of our time doing? Very often, we spend our time looking for clues. We're inspecting state to try to find clues and form a hypothesis. With dotmesh, we're giving you another dimension on which to find clues: it's no longer just "this version of the code causes this problem", it's this version of the code together with this particular interesting state, which might be spread across the databases of three different microservices. So anyway, that's basically what got me excited when I was writing these tutorials: this idea that you can actually use dotmesh to find clues for a hypothesis, and do what we
call "reducing the mean time to clue". If you can reduce the average amount of time it takes for a developer to find the first and then the second clue, you can speed up software development, and I think that's where things get really interesting.

Quick question: I did the Katacoda tutorial you have online, which is super smooth; you just go there, click, click, click, you don't need to set up anything. Awesome. That reminds me a bit, coming back to the analogy with a service mesh: in Istio there's this thing where you can inject failures, right? You can say, every third request is a 404, or whatever, and try it out. Is there something comparable in dotmesh, where you could inject, say, a broken schema?

Well, you could use dotmesh to capture a schema that you'd broken in a certain way, or some data that didn't upgrade properly when you tried to apply a schema modification to it, and I believe that's valuable when trying to collaborate with other developers about state. You basically build up this library of states in the Dothub, which is our SaaS service: states that are interesting to the entire team, which anyone can take off the shelf at any point and just reinflate, so you've got your code and your data in the same place, rather than just the code and no way to address the data.

Is the data in the Dothub encrypted or not? That's something we've heard is probably a requirement.

It's currently not encrypted at rest; we encrypt in-flight data to and from the Dothub, but not at rest yet. That's clearly something we're going to have to do.

We've got a few more minutes, if you want to talk a bit more about roadmap items: what's coming up in the next release, what do you plan to do in the next
couple of weeks or months, besides getting a little bit of sleep, can you imagine?

Yeah, sure. So just before I touch on the roadmap: I showed you the tutorials, but I didn't actually go through the demo. Should I go through it? Yeah, sure, let's do it, that's awesome.

So, for everyone who's watching, I also encourage you to try it yourselves: you can go to dotmesh.com/try-dotmesh, which is also linked from the homepage, and you get this little tutorial here. What we've got here is a host environment, a real computer running here; I think I need to reload the page. So we've got a real computer here, you can see it's actually running Linux, and then we can run through the tutorial really quickly. We can install the dotmesh client binary, and then we use it to create a single-node dotmesh cluster running on this machine; the only dependency is that you have a computer running Linux with Docker installed. So this essentially sets up your... Dothub? No, it sets up a local dotmesh cluster that can then push and pull data to and from the Dothub. Got it, cool.

So we can check that; well, yes, it's up and running, and it's running the 0.1 release. That's wonderful. Then we can go and clone this demo repo. Now, this is the simple version of the application I was talking about, the one before you have multiple microservices; this is why I call it a hello world, because it's just really super simple. You can run this dm list command and see that just spinning up this docker-compose file has created a data dot, and I can show you how that happened by looking at the docker-compose file here: under the redis entry in the docker-compose file you can see there's a volume driver called dm
specified, and a volume called moby-counter, and that's why, when we spun up the docker-compose file, dotmesh created the volume called moby-counter: because it didn't exist yet. But I can now do dm switch to moby-counter, and dm switch is a little bit like in git, where you have to cd into a repo before you can do things. Of course, there is no manifestation of the data dot in the file system directly, because it's all attached to this Redis container, so dm switch is kind of like cd. And from there on in, it's just regular git-like commands, except they operate on a database, or potentially a set of databases, and that's where the library tutorial comes in, because it actually runs three different containers on the same dot in different subdots.

But anyway, the simple example is that we can capture the empty state, check out a new branch, load the app up in your browser, and put an "a" on the screen. And that creates data, right? That essentially populates the data store? Exactly, this is putting the location of each of the clicks in the Redis database, and you can see that if I reload the page, it must be persistent, because it remembers the "a". The next thing we can do is capture that state, with the "a" on the screen. And the other interesting thing is that we can then go onto the master branch, and our data has disappeared, because on the master branch there was no "a" on the screen. So now I'll go a little bit off-piste: we can create a new branch called branch-b, and dm branch shows that we're now on branch-b, and we can put a "b" on the screen. We are going to award prizes for the most creative art created this way. And I can just do dm checkout branch-a, and I switch that state from "b" to "a", and then I can
go back to branch-b, and back to "b". So it's this basic idea that you can start to treat your containerized databases like git repos, and that works even if you have more than one microservice: you can capture a single atomic commit with the state of more than one database at once. I'll leave the demo there, because I know we're short on time, but I encourage everyone to try it, because the next step in the demo is pushing that branch to the Dothub and then pulling it down onto your local machine, to prove that you can round-trip the data. So I'll leave the demo there, to give people something to try at home as well.

Perfect. Quick question: so the Dothub is essentially the central piece, like GitHub is for code, and it's currently your SaaS offering? That's right. And I suppose if I want that in the enterprise, behind the firewall, then I reach out to your big sales team, who will tell me how much money I have to put in?

That's very well said. I'll just bring up our pricing page quickly, because if you go to dotmesh.com there's a section on pricing. Before I talk about pricing, it's very important to point out that dotmesh itself is open source; it's available on GitHub at dotmesh-io/dotmesh, and I'm a strong believer in the necessity that the open source be feature-complete and powerful. That's why open source dotmesh supports clustering, and everything you can do with the Dothub today apart from the web interface. So if you want to run your own version of dotmesh on premise, you can do that just by picking up the open source and running and operating it yourself. With that said, however, we are a business and we do need to make money, so we're offering a hosted version of dotmesh at dothub.com, and you can
go to dothub.com and see that it's a SaaS service. For example, I'm logged in here as Luke Marsden and I can see my different branches; it looks and feels a tiny bit like GitHub, and this interface is the start of the thing we're going to turn into an enterprise version. On the pricing page you can see there's a free tier, so you can come and try it for as long as you like, with a gig of storage for free. As soon as you bump over that one-gig limit, it's a very, very simple ten dollars per user per month for our developer accounts, which is the price of the second-cheapest DigitalOcean droplet; we've priced it with respect to what developers are used to paying for things. But then, as you start adding team and business features and functionality, the price goes up accordingly. And then there's the enterprise version: as we develop more features in the Dothub that are specific to the SaaS, those are the things we hope will eventually turn into the more scalable business model, where you can run a version of the Dothub on premise and we can help and support that.

That's super exciting, and I can finally really say: all right, I got it. I was excited yesterday, but now I get what you're doing here. I'm currently still on the free plan, and that helps, but I'm going to upgrade to the developer plan, and I'm going to use it. It's really awesome, I love it, and I hope people out there can appreciate it as much as I do, because you do need a little bit of a background in data to really get that this is kick-ass, that this is the future. Thanks a lot for your time, Luke; I'll see you soon in person and we'll continue the discussion over a pint or whatever. But congratulations again, this is really awesome, and
I hope that whoever has a question will be able to reach out via your support channel, Slack?

Yes, absolutely, that's worth mentioning. Please do come and join: it's actually a little bit hidden on our homepage at the moment, and I want to make it more obvious, but there's a little Slack link down here that takes you straight to the Slack invite page. So come join our Slack and chat to us, give us feedback, or reach out to us on Twitter at twitter.com/getdotmesh, because "dotmesh" was taken. I really look forward to continuing the conversation, and thank you, Michael, for taking the time. Awesome, thank you so much. Okay, cheers, bye!
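For reference, the hello-world flow Luke walked through in the demo condenses to roughly these commands. The dot and branch names come from the demo itself; the exact commit, checkout, push, and clone syntax, the "hub" remote name, and the user placeholder are assumptions from memory, so check the dotmesh tutorials for the canonical form.

```shell
# The demo flow, condensed (commit/checkout/push/clone syntax assumed).
dm switch moby-counter               # like cd-ing into a git repo, but for a dot
dm commit -m "empty state"           # capture the empty Redis state
dm checkout -b branch-a              # new branch; click an 'a' in the app, then:
dm commit -m "a on the screen"
dm checkout master                   # the 'a' disappears: different state
dm push hub                          # push the dot to the Dothub...
dm clone hub <user>/moby-counter     # ...so a teammate can pull the same state
```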