A couple of pretty beefy servers, 150 bucks a month. It's not all that cheap, but it was a good way to still do my own stuff, mostly before the whole cloud thing was really a thing.

But what did I really want out of it? In a nutshell, I wanted a pretty simple platform that can host containers and virtual machines for whatever I want, really. It needed to be very easy to create additional instances, very easy to set up networking and storage. I also wanted the ability to easily delegate access and resources to friends, family, non-profits, that kind of stuff, so that I could share the extra resources I had with others.

I also wanted this to have as few moving parts as possible, because I don't like fixing things. When you've got 50 different pieces of software all combined together, there's a tendency for one of them to go wrong reasonably often, especially when you apply all your updates all the time to stay on top of security; eventually, something will break. Having fewer pieces meant I could build real knowledge of those few pieces, really understand them, and if something does break, actually have a clue how to go about fixing it.

I effectively wanted the ability to tolerate the loss of an entire system. That was getting a bit problematic with the dedicated hosting providers. They're pretty good at coming to replace a disk if one fails, but the data on it is gone. And if a server's motherboard gets fried or something, it can take a while for it to come back online, and you may not get the same hardware back. Keeping a spare machine with those providers also increases the price quite a lot, because each machine is meant to be completely independent, with its own internet access and its own everything. So yeah, I wanted something that could tolerate the loss of an entire system, should it happen, and with a clear path for recovery: not just "theoretically we can redo the system and it will be fine", but something that gets stress tested regularly, so that when it happens, I know how it works and it will be fine.

Related to that, I wanted regular maintenance to be very easy to perform, without any fear of breakage. Having run my own servers for a long, long time, I remember the days when people would look at their uptime and go, "look at my server, it hasn't been rebooted in a year, that's great". Well, it's not so great, really. It also means you haven't done security updates in a year. And it usually means you're absolutely terrified of rebooting that machine, because you haven't done it in a year. Will the machine even come back on the network? What do you do? Do you need to call your data center, or get someone to hook up a KVM and figure out what the hell is going on? It's a bit of a problem.

So why not just use the cloud? That's the first reason, really: the cloud is not cheap. It really isn't. It is cheap for some things. The cloud really started as a way to cheaply handle temporary additional load: if you're running an infrastructure and you need to deal with a Black Friday sale or the Christmas holidays, it's great, because you get additional capacity pretty much instantly, and you can get rid of it as soon as you no longer need it. It's amazing for that.
But keeping things running year-round with reasonably large instances in the cloud can be pretty expensive, especially because the clouds have a tendency to charge you for things you're not used to being charged for: network consumption, storage, backups, those kinds of things. And it's usually priced in a way that makes it very tempting to get into a service, but very expensive to get back out. It's very easy to back up your data, and that part is cheap; actually restoring it or moving it elsewhere is quite pricey. There's a bunch of that that can make things quite expensive.

The cloud definitely has its uses. It's great if you need to run your application in a very geographically distributed way; if you need small instances everywhere, it's amazing for that. If you need to deal with temporary load, real spikes in usage, amazing for that. If you're looking at renting three fixed, beefy instances year-round for the next decade, maybe not so amazing for that.

The other thing is that the cloud got extremely complex. There's an "easy reference chart" from Google Cloud; I'm not sure I really agree on the "easy" part. It definitely is a reference sheet, but it is not simple. Now, the vast majority of those services you can usually ignore for day-to-day things; you just need compute, storage and network, and you're mostly done. But that might not be the best way to optimize cost. Often the best way to optimize cost is to actually make use of a lot of those services, which means you need to learn about a lot of things. And by the time you've learned all of that about your cloud provider, you'll notice you're extremely vendor locked in. It's going to be extremely difficult to move to another one, because they're going to have a sheet just as confusing, if not worse, but not quite the same services, and not quite the same APIs, and moving cloud to cloud gets pretty tricky. So if you want to avoid the lock-in, you need to do everything yourself in the cloud, which again gets very expensive.

There's also the slight issue that cloud platforms are generally not open source. They can change things under you, and if you don't like it, well, go elsewhere. There's no way for you to go and fix things, no way to contribute additional features, none of that kind of stuff.

So, how to build your own thing. These are the technologies I went with: Ceph, OVN and LXD. I'm slightly biased on the third one, for sure, because that's my own project. But I was looking for a way to have redundant, distributed storage, network and compute at a pretty small scale, something that can run on pretty cheap hardware. I've tested it on Raspberry Pis, so that's the low end of it; obviously, what I'm running in the data center is a bit fancier than that. But it can be pretty cheap, and you can try it very cheaply yourself.

All of those are open source. They're all available in a variety of Linux distributions. They're all stable, they all have frequent releases, they all have LTS releases. And they're also pretty good at not stepping on each other. Ceph has a lot of features; it's quite complex, but it's all storage. That's all it does, and it does it pretty well. OVN is just networking. It also has a lot of features, as you would expect from a modern SDN-type solution.
But it's not trying to get into storage, or to deal directly with containers or instances, or that kind of thing; it does just network. And similarly, LXD really does just instances: containers and virtual machines. It has its own clustering bits for redundancy, but it does its own thing; it's not trying to step on everything else either.

So let's go through all three pieces in a bit more detail. For those who are not aware, Ceph is, I guess, kind of the de facto solution now for distributed storage. It provides block, file system and object storage, and supports things like snapshots and replication. It's got some fancy background tasks and management services, it's got observability built in; it's got a lot of the features you want to run your storage and figure out what's going on. You can set different classes on storage, because you might have a mix of hard drives, SSDs and NVMe drives and want things split whichever way you want, and you can control the exact replication you want. It's reasonably modular by design, so the control plane bits are completely separate from the daemons that manage the drives themselves. And it's highly available: it uses Paxos for its distributed monitor database, which requires a minimum of three systems for HA of the control plane. It's open source, released under the LGPL, with major releases every two years that get, I believe, up to five years of support; effectively, LTS releases every two years with frequent bug-fix releases in between.

On the OVN side, it's a pretty modern SDN. This one has a lot more alternatives these days, especially if you look at the SDN solutions for things like Kubernetes; there are a lot more players there. The benefit of OVN is that, first of all, it's based on Open vSwitch and on existing modules in the mainline kernel. You don't need to add vendor kernel modules and that kind of stuff, and it's not dependent on any hardware infrastructure; it's all upstream. It supports hardware acceleration, so you can have it use very fancy Mellanox NICs, for example, and offload all of the flow rules onto those NICs, which then lets you do 100-gigabit instance-to-instance traffic. We've measured that: not quite 100, but 92 gigabit or so, which was good enough.

It supports the basics you'd expect: distributed switches, routers, load balancers, access control lists, DNS. And all of that is effectively done by generating flow rules that are then distributed to all the participating systems. Whichever system is the active router is only the active router for ingress traffic; traffic that goes instance to instance goes directly through tunnels between the machines, making it extremely fast and using very few resources. It's highly available, with a Raft database for its configuration, so it requires three systems as well. It's Apache 2 licensed, with stable releases every three months and an LTS every two years. So, another nice and stable project we can rely on.

And then on the LXD side, as I said, I'm slightly biased because it's my own project. It provides system containers and virtual machines. It's got a REST API that makes it easy to drive remotely. It supports clustering, similar to OVN, with a Raft database and three systems minimum for HA. It integrates natively with both Ceph and OVN, so that all works nicely.
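Since the REST API came up: here's a minimal sketch of poking it directly. The socket path assumes a non-snap install (snap installs use /var/snap/lxd/common/lxd/unix.socket instead), and jq is only there for readability.

```sh
# Talk to the local LXD daemon's REST API over its unix socket;
# "lxd" here is just a dummy hostname, the socket does the routing.
curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0 \
  | jq .metadata.environment.server_clustered

# The lxc client wraps the same API; "lxc query" is a raw shortcut.
lxc query /1.0/cluster
```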
It's also image-based, with images for a whole ton of distros at this point; I think we generate something like 375 images daily. So pretty much any distro you can think of, we've got images for, on all architectures, and we build both virtual machine and container images, some with cloud-init built in, some without. Effectively, we've got images for everything you might want. Its clustering is very nice and easy to use, which I'm going to demo next. And it's Apache 2 licensed, with monthly releases and LTS releases every two years that are supported for five years. So, very similar to the other two as far as upstream stability and compatibility.

Now, a quick demo break. I'm going to show how to set up a three-node LXD cluster, because that's nice and easy, so I might as well show it. I've got three Raspberry Pis here: rpi01, rpi02, rpi03. I'm going to run lxd init on the first one, which prompts you for a few different things, the first question being whether you want clustering. There we go: clustering, yes. It picks up its IP address, that's fine. We're not joining an existing cluster, we're creating one. For the name, we'll just pick the local hostname, that's fine. We don't need to set up any kind of password authentication; that's actually a weak point: if you turn on password authentication, it's less safe than the token-based mechanism that's enabled by default, so don't do that. For storage we can use, in this case, ZFS, the default, but we could also do Btrfs, LVM, or indeed Ceph, as I mentioned. No remote storage right now, because I don't have time to demo Ceph this moment. And it's going to create a small overlay network for us; not using OVN here, it's using the Ubuntu Fan, because OVN takes a tiny bit more resources to set up. And that's the first one done.

Now what we can do is lxc cluster add rpi02, which gets us a token. It's not going to be fun to copy-paste because of my screen session, but that's my fault. I already started lxd init on both of the others, because it takes a tiny bit of time to generate the certificate. Once that's done, we just say yes again to clustering, but this time we're joining an existing cluster. So, joining the cluster, yes, and it's asking for the token now, so I just need to copy-paste this thing; sadly in two chunks, because my screen is a bit weirdly set up here. Go. OK. It tells me that if there's any data on this LXD, it's going to be wiped, because it's joining a cluster. I can override some storage settings; I'm not doing that. A few seconds, and this one has joined.

Now, rinse and repeat for the last one. Joining a cluster; do I have a join token? Not yet, but I'm about to. There we go, let's copy that join token. The token effectively includes a unique secret for joining, but also the certificate fingerprint and IP addresses of the existing nodes in the cluster, so you don't even need to say what you're joining; the token includes all that data already. Same thing, answer the questions, and we're done.

If I do lxc cluster list, I can see that this LXD cluster now has three machines. The database is running on all three of them, and the leader for the database is currently the first one. And if we just want to launch an instance, let's use Alpine, mostly because of its size; it's super tiny, so there's almost nothing to download. There we go.
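Condensed, the whole flow from that demo looks roughly like this; the hostnames and instance name are approximations of what I used on stage.

```sh
# On the first Pi: interactive bootstrap; answer "yes" to clustering
# and create a new cluster, accepting the detected address and name.
rpi01$ lxd init

# Still on the first Pi: mint a one-time join token for the next
# member. The token bundles a secret, the cluster certificate
# fingerprint and the member addresses, so that's all a joiner needs.
rpi01$ lxc cluster add rpi02

# On the second Pi (and then the third): same init, but this time
# choose "join an existing cluster" and paste the token.
rpi02$ lxd init

# Verify the cluster, then launch a tiny test instance.
rpi01$ lxc cluster list
rpi01$ lxc launch images:alpine/edge c1
```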
And we have ourselves, in a few seconds, an instance with an IP address. There we go. We can go in there, ping Google, and that works. So that was setting up a very simple LXD cluster: three machines, HA works, pretty easy user experience.

But that's not what I'm running in production, so let's look at what I ended up with. That's in my colo rack in Montreal: three machines. I bought them off eBay for, I think, four grand total, but the machines themselves were much cheaper than that; the machines were probably about two grand in total. The rest was brand-new storage, because I don't want to buy disks and SSDs off eBay. They are three perfectly identical systems: dual Xeons with 8 cores and 16 threads each, 64 gigs of RAM each, 10 terabytes of hard drives, 2.5 terabytes of SSDs, a mix of SATA and NVMe. They've got 10-gigabit networking. That part is pretty useful for Ceph: if you're going to run it in production, having decent networking is quite useful, otherwise you're going to bottleneck on your network pretty quickly.

And hosting fees are pretty cheap in Montreal. With the 50 terabytes of bandwidth I've got, it's right around $250 US, which, considering I was paying $150 per server in rental before, ends up being pretty worth it if you can keep those machines around for a while. Plus, you get to do things you can't do with dedicated servers or, in some cases, even with the cloud. In this case, each machine has a 20-gigabit connection to the others, patched directly into each other. You don't get that when you rent random cloud instances or rental servers; those are going to be very independent. And I went slightly crazier, because I'm a bit of a networking geek: I also got my own AS number and public IPv4 and IPv6 allocations, and I've got direct access to the internet exchange in Montreal over a separate, dedicated 10-gigabit link. So I've got pretty crazy connectivity. It's all nice and fun, and it's still at a budget where I can consider it a hobby, which is pretty nice. But again, that's the overkill version; at home, you could do it on a set of Raspberry Pis and it would be way cheaper.

Now, at home I've also got a crazy setup, because, yeah. I don't actually have a cost for those machines, because it's pretty much all recycled stuff: old machines from work, old machines friends gave me, some random stuff I bought here and there for extra memory and storage. So I've ended up with seven machines in that cluster. A whole bunch of them are ARM servers, actually: four of them are APM X-Gene type servers; those are the first two U's there, because they're Supermicro chassis that can take two servers per U, which is convenient for density. Then I've got a bigger Supermicro Intel server; that one runs a whole bunch of VMs and CI and stuff for the LXD project, and it's got 640 gigabytes of RAM, so that one is pretty darn beefy. Then I've got another prototype ARM server; I can't tell you who the vendor is because I'm not allowed, but it's got 48 cores and 64 gigs of RAM. And my seventh server is a tiny Libre Computer board with just four cores and two gigs of RAM. It's in the same cluster, and that's perfectly fine.
And that gets me a cluster that can run containers and virtual machines on Intel 32/64-bit and ARM 32/64-bit. It has a lot of resources and it's very flexible. Storage-wise, I effectively picked up everything I had in my basement and dumped it all into those machines. At this point, that's 48 terabytes of hard drives, 3.5 terabytes of assorted NVMe drives scattered across those machines, and 18 terabytes of SATA SSDs. Most of them are about to die, because they're very old drives I picked up from elsewhere that are throwing a whole bunch of SMART errors and such; but until they're dead, might as well use them. It's all on 10-gigabit networking, and I've got a UPS, because power cuts are a thing. So yeah, that's definitely a home setup; again, a bit crazy, but mostly recycled stuff that was lying around. And instead of having all of those machines be completely separate, having to connect into each of them and forgetting what they do and all that, at least they're all in one nice cluster that I can talk to and get whatever I want out of. Nice and convenient.

So, how do you build your own? That's where things start to get slightly tricky. The hard way starts with deploying a Ceph cluster, which maybe sounds easy for some, I don't know, but it's not quite that easy. A lot of these projects do have good instructions. Ceph has a few different ways to be deployed. The recommended way these days effectively uses Docker containers, which is not a great fit if you're going to run LXD on those machines as well, because they'll step on each other's feet; you don't really want that. Another option is ceph-ansible, which works pretty well; it's actually what I ended up using for some of those. And otherwise there's ceph-deploy, which is effectively deprecated but still works really well: all it needs is SSH, and it just goes and does things.

OVN, then, is not quite as nice to deploy, unfortunately. The documentation is pretty sparse and mostly OpenStack-specific. It's not very difficult to do, but there's no nice automation for it, which is slightly annoying. The LXD side, well, I showed you how to do it just a few minutes ago; that part is pretty trivial. And once you've got LXD, you can connect it to Ceph and OVN, roughly as sketched below, and enjoy your new personal cloud.

As part of the whole pandemic thing, I've been recording a lot of videos on the LXD channel on YouTube, and that includes how to set up Ceph and OVN by hand, so you've got step-by-step instructions there if you want to go down that path. It can certainly be simplified by things like Ansible playbooks; as I said, there are some for Ceph that work really well, and I believe I've seen some for OVN that didn't work so well, but they exist. Still, it's not super amazing, because it's not as nicely reproducible as you might want. In my case, I'm using a cluster like this personally; I assembled it once for that data center, and it's fine. But if you're a telco, or a retail store, or something like that, really dealing with thousands or tens of thousands of edge-type locations, and you want a small cluster like this at each of those locations, it's not so great to set up by hand.
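To give an idea of the LXD side of that hookup, here's a rough sketch, assuming a Ceph OSD pool called lxd-pool and an OVN central already running; the member names, addresses, parent interface and IP ranges are all illustrative.

```sh
# Ceph-backed storage pool: in a cluster, define it once per member,
# then a final call creates it cluster-wide.
lxc storage create remote ceph source=lxd-pool --target=srv01
lxc storage create remote ceph source=lxd-pool --target=srv02
lxc storage create remote ceph source=lxd-pool --target=srv03
lxc storage create remote ceph

# Tell LXD where OVN's northbound database lives.
lxc config set network.ovn.northbound_connection tcp:10.0.0.10:6641

# An uplink network for external connectivity (again, per member and
# then finalized), and an OVN network on top of it.
lxc network create UPLINK --type=physical parent=eth1 --target=srv01
lxc network create UPLINK --type=physical parent=eth1 --target=srv02
lxc network create UPLINK --type=physical parent=eth1 --target=srv03
lxc network create UPLINK --type=physical \
    ipv4.ovn.ranges=10.0.0.100-10.0.0.254
lxc network create my-ovn --type=ovn network=UPLINK
```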
But as it turns out, we're trying to improve that. One thing we've been working on, which isn't quite what we want yet for those kinds of solutions, but is a good step in the right direction and a good thing for people to experiment with, is using Canonical's Juju, which I guess I could define as a bit of a cross between Ansible and Terraform, to an extent. It's far more than a configuration manager: it's a deployment tool that takes a list of services you want to deploy, in this case OVN, Ceph and LXD, and does the deployment for you against a variety of substrates. In this case, I'm using LXD virtual machines, because I'm me, but you could use a public cloud, bare-metal machines, those kinds of things. It works pretty well: you throw that bundle at it, it does the deployment, you wait ten minutes, and you've got something that works, which is really nice for testing. It does have the slight issue that it's not currently capable of doing proper HA on just three machines; it needs more machines than that, which is problematic if you're really looking for that tiny, tiny footprint. But if you want to try a solution like this in a nice and easy way, you can use Juju, and then redo it by hand, or with something like it, on fewer machines.

Which brings me to actually showing that Juju thing. If I switch over to this machine, that's my desktop back home, and if I do juju status, with color, and go look at what that shows: at the top is the list of applications being deployed. We can see it's deployed ceph-mon, which is effectively the Ceph API, and ceph-osd, that's the Ceph disks. It's deployed LXD, ovn-central, which is the OVN control plane, and ovn-dedicated-chassis, the bit that needs to go on every one of the machines. It's also deployed some additional bits: we've got integration with Prometheus and Grafana for LXD, so it deployed those so you get nice observability of the workloads we're running. It also deployed a HashiCorp Vault to store the keys needed for OVN in this case. That's not strictly needed when you deploy things by hand; you can totally generate your own PKI on the side and put it in place, which tends to be way easier than running something like Vault, but Juju can do it, and it just did in this case.

Then we can see the status of all the different things being deployed. This is currently deployed against, I believe, seven systems. One is used to co-locate all of the infrastructure-type services, so it runs Prometheus, Vault, Postgres, all of those bits. Then I've got one running Grafana on its own, just to have an externally reachable IP address on it. And then I've got five that are running LXD, ceph-osd (so the Ceph disks), and OVN. At the bottom is the list of machines currently in use, and the "/lxd" entries show that, for the machine running all the infrastructure services, Juju effectively creates LXD containers on there to keep the different services isolated from each other. Nice and convenient.
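For reference, getting to that point is roughly the following; the model name and bundle path are illustrative, the bundle itself being the one from the LXD charm documentation.

```sh
# Fresh model, then deploy the whole bundle (ceph-mon, ceph-osd, lxd,
# ovn-central, ovn-dedicated-chassis, vault, observability bits).
juju add-model demo
juju deploy ./lxd-ceph-ovn-bundle.yaml

# Watch everything settle into "active" status.
juju status --color
```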
Now, this is effectively a functional cluster. If I juju ssh into one of the machines, go on there, and look at lxc cluster list... oh, I picked the wrong machine. Let me pick a machine that's actually part of the cluster; this one should be. Okay. So, Juju deployed those machines and set up the LXD cluster for you, and all of that is in place. Now, even though it deployed Ceph and OVN and put all the credentials and things in place, LXD has not been configured to use them yet. If we look at the storage, for example, we only have local storage set up right now, and for the network, we've got an LXD fan bridge set up as a much simpler overlay, but it's not yet integrated with OVN.

So first, let me show what was actually deployed; that's this YAML file here. Each of my systems has a /dev/sdb; that's the disk I want to put into Ceph, so that's what's configured at the top. We also tell Juju how many Ceph monitors (the API servers) are going to be running, three in this case, and how many total disks to expect, five in this case. Then there's a list of machines (you can put restrictions on their size and such there, which don't apply here, because these are pre-existing systems), and then the list of applications. This particular file is called a Juju bundle; it's actually included in the documentation for the LXD charm, so you can easily find it online. It includes all the different services, and at the end it includes a list of all the connections between those services, so that Juju can integrate everything nicely.

But as I said, we've now got Ceph in place, OVN in place, LXD in place, but they're not actually integrated with LXD yet. So I've got another script I wrote, which is the post-deployment part. The first thing it does is expose both Grafana and LXD to the public internet, so I can easily reach them from my laptop. Then it connects to LXD itself: it uses a Juju action to trust my client, so it just adds my certificate to LXD's trust store, and then starts using it. Once it's done that, it tells Ceph, again through Juju, to create a new Ceph pool, and adds that pool to LXD. Then it does the network side of things: it creates an uplink network (I've got a dedicated NIC for internal traffic on each of those VMs, so it's using that), and then it creates a bunch of OVN networks; those are proper, distributed OVN networks. And lastly, it spawns a whole bunch of instances, so we can make sure everything works nicely. The gist of it looks roughly like the sketch below.
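A rough sketch of what that script boils down to; every name, subnet and interface here is illustrative, and the Juju actions for trusting my client and creating the Ceph pool are elided (as are the per-member --target steps shown earlier).

```sh
# Expose Grafana and LXD to the outside (application names as they
# appear in the juju status output).
juju expose grafana
juju expose lxd

# Talk to the cluster remotely, once my certificate is trusted.
lxc remote add demo-cluster <cluster-address>
lxc remote switch demo-cluster

# Expose the Ceph pool (created via Juju) as an LXD storage pool.
lxc storage create remote ceph source=lxd-pool

# Uplink on the dedicated internal NIC, then OVN networks on top.
lxc network create UPLINK --type=physical parent=enp6s0 \
    ipv4.ovn.ranges=172.31.254.10-172.31.254.254
lxc network create default --type=ovn network=UPLINK
lxc network create demo1 --type=ovn network=UPLINK
lxc network create demo2 --type=ovn network=UPLINK
lxc network create demo3 --type=ovn network=UPLINK

# A few test instances on local and remote (Ceph) storage.
lxc launch images:alpine/edge a1 --storage local --network default
lxc launch images:alpine/edge a2 --storage remote --network default
lxc launch ubuntu:20.04 u1 --storage remote --network default
```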
So let's run that script and see if it actually works. The first thing it does is create a certificate, insert it into the trust store for the cluster, and connect to it. Once that's done, it talks to Ceph to create the storage pool and adds it to all of the cluster nodes; we can see that a "remote" storage pool has now been created, with nothing using it yet. Then it creates the uplink network for OVN, so that's the external network, the subnet OVN can hand out public IP addresses from for external connectivity, and then the OVN networks themselves, which you can now see: default, demo1, demo2, demo3. Then it goes on to create instances. It created the instances on local storage pretty quickly; for the remote ones, it had to create Ceph block devices and unpack the image into them, and create the instances from there. And now it's doing an Ubuntu image as well, so it's downloading the image and unpacking it, I think in this case into local storage first, and then it's going to unpack it again into Ceph. And then we should be done as far as creating instances. It's going to get us the Grafana password at the end, so we can go look at that. But yeah, then we'll be up and running with a bunch of instances. I realize this is a step towards something nice, convenient and user friendly; it's not quite what we want our final solution to be, and we do have something better coming up, but this works.

And now we're done. If I switch my client over to that remote, which I believe is called demo... So if I do lxc list, I should be talking to it. I believe it's called demo. I'm hoping it's called demo. Do I have to look at the script again to see what I actually called it? This is taking way too long; it's probably wrong. Let me go look at the remote add. Okay, the remote is actually called something else, so that wasn't going to work. Switch to the right one, and there we go.

All right, so now we've got those Alpine instances here: the first three are on local storage, the next three are stored on Ceph, and then you've got the Ubuntu local and remote instances. And if we go look at our cluster, we've got five machines. They're all in the database, but the first is the leader, the next two are stand-bys, and the remaining two don't even get to vote; they effectively get a stream of the database events, and should one of the others go down, they can quickly be promoted to stand-by or leader. That all works completely automatically.

Now, let's wreak a bit of havoc. Say I want to do some maintenance on the first server. I can do lxc cluster evacuate on srv01, say yes, and what LXD does now is: for the instances with local storage, it stops them, moves their storage across, and starts them back up on another machine; for instances with remote storage, it doesn't need to move anything, it just starts them back up somewhere else. So now we've done that; if we list and filter for everything that's on srv01, there's nothing. You could now do your maintenance, and once you're done, you just do a restore of srv01, and it will again stop the instances wherever they ended up, move them back where they're supposed to be, and start them back up. That's what's going on there. In this case, I'm just using containers, because it's faster; but if you're using virtual machines, you can actually do live migration here, so you don't stop the workload at all. And there we are, they're back on srv01.
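That whole maintenance dance is just two commands, roughly (member name from the demo):

```sh
# Drain srv01: local-storage instances are stopped, moved and
# restarted elsewhere; Ceph-backed ones just restart on another
# member (and VMs can live-migrate instead of stopping).
lxc cluster evacuate srv01
lxc list | grep srv01        # nothing left running there

# After maintenance, move everything back where it belongs.
lxc cluster restore srv01
```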
Now, let's see what we've got on srv02. Okay, only local instances, so that's not good for this. srv03 has a remote one, so that's good. srv03 is 10.48.79.68; if I put the dots in the right places, there we go. So, let's cause that machine to die. Just hoping it dies quickly, otherwise I'm going to have to kill it harder. That's really stopped, okay, cool. So this one is gone now, which means that if we look at our instance on srv03, it's having a bad time: it's in error state, because that machine is gone, and the instance was not moved anywhere else. That's what would happen if that machine were to lose power.

But because that instance is stored on Ceph, we can just move it, targeting srv02, and start it back up. So it just got moved and started back up somewhere else, and we didn't lose any data, because it was stored on Ceph; we're all good. Its network is OVN, so it comes back with the exact same IP address, and we're good. That's what would actually happen if a machine were to fail. We don't have support for automatically moving things when that happens. We could do it; we didn't want to, because you always have the issue of: is that machine properly dead, or just mostly dead? And if it comes back, then what happens? If you've got that disk open in two different places, that can cause corruption. So we deliberately don't do it automatically, but it's very easy to detect that a machine is dead, and very easy to recover from it.
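And that recovery, roughly as run in the demo (instance name illustrative):

```sh
# srv03 is dead; the instance shows as ERROR, but its root disk is a
# Ceph volume, so just re-home the instance and start it elsewhere.
lxc move c1 --target srv02
lxc start c1   # comes back with the same OVN-assigned address
```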
And as I said, we do have a Grafana dashboard as well, so let me switch to that. I believe it's there; let's refresh this thing. There we go. That's the Grafana dashboard automatically set up by LXD when connected to Grafana, and in this case, it shows you the top usage for memory, disk, and a bunch of other things. The slight issue I've got is that none of those instances are using any CPU right now, so there's actually nothing in that graph; if they were using CPU, they would show up. (Not sure why it exited presenter mode when I did that; let's get that back, and going here, there we go.)

All right. So, what's next? Well, as you could probably figure out from the way I was speaking: the manual approach works, but it's a bit of a pain to do. The Juju approach also works, but it's currently not ideal for an actual self-contained, three-machine setup. What we're working on now is making both Ceph and OVN as easy to cluster as what I showed you with LXD, with the same kind of idea: you bootstrap on one, you join the others, the roles get arranged dynamically, and you've got an HA cluster. We're going to do that for both Ceph and OVN, and then we're coming up with, effectively, an appliance-type image that includes LXD, Ceph and OVN, all done that way, all tied together. The idea is that you'd get, say, five of those systems, either pre-imaged somewhere or imaged by yourself; you plug them all into a switch, you start them up, you get a shell on the first one, you do a bootstrap there, and it can find all of the others on the network. You just confirm, yes, those machines are the right ones, they match my serial numbers or whatever we show. Adopt them all. And you end up with Ceph, OVN and LXD all deployed and clustered together, ready to go; you can start spawning virtual machines and containers at it. That's work we've got in progress. As I said, we've done it for LXD before, and that's what I showed you in the earlier demo. We're very close to having it done for Ceph now, and I've actually submitted another talk at Open Source Summit Europe.

If that gets approved, I'll show you the Ceph piece there, and we believe we should have the whole solution sorted by the end of this year, which we're really, really excited about. Because whether you want to run this at home on a few Raspberry Pis, or in a data center or colocation facility like I'm doing, or you're a telco wanting to run this in hundreds of thousands of small locations all across the country, or a retail store that wants to run point of sale, surveillance, whatever, in the back office: it's going to be a very, very good solution for that. It's going to be extremely reproducible. There aren't going to be random packages being installed and stuff; it's effectively three read-only images, one for each of the pieces, that just get installed, effectively stateless. Everything is highly available across all three of the main components: you can lose one of those systems, and all you really need to do is let it reshuffle your stuff, then drop-ship a replacement, have someone plug it in, and you're done. That's pretty much the end game there.

That's it for this talk. If you have any questions, I'm going to be around, but I also don't want to stand in your way, because the lunch break is coming up. Feel free to reach out to me at any of those contact details down there. I've also put links to each of the different projects, as well as things like our YouTube channel with the videos on how to assemble this stuff yourself, should you want to do that. Thank you very much.