I'm surprised to see so many faces in here. The theme is rather application oriented; when I was preparing the presentation, I thought I should apply for the "least technical content of all the conference talks" award. So we're not going to dig deep into Ansible playbooks, programming details and so on, but I'll give you an outline of the architecture we developed, the reasoning behind it, and our experience with it. The agenda is not the table of contents, so I don't have slides with each of these as a title; it just gives you a rough outline of what I'd like to talk about: an introduction of our company and what we do, the challenges we face in hosting, our first attempt at solving these challenges, why we changed from a NanoBSD setup to a jail-based architecture, what we do now in our data center, and what we'd like to do in the future.

I myself have been working in IT since 1986, and my UNIX endeavors started with discovering Minix in 1989. Ever since I took on FreeBSD in '93, I haven't looked back and never regretted it. I'm currently in charge of network and data center operations at punkt.de, and I'm a proud member of team MOps, the magnificent operators. We have three guys who are originally operator-type people and one fresh colleague who is a developer. The fun part is that "Mops" to all the German speakers means a pug, that smallish kind of dog. So we have an unofficial team logo that looks like this. Does that mean that we do DevOps now? Well, yes and no. I think DevOps must be the most misused IT term of the last year or two. When managers say DevOps, they mean one of two things most of the time. Either they mean NoOps, so the developers do the operators' job. But there is a reason why people like me are still around and why managing a data center is a profession. Or they mean infrastructure as code and just slap the DevOps label on it. And I'm fine with that, because this is what we're trying to do and what I'm going to show you. This has been the agenda of our entire team for the last couple of years.

punkt.de, the company, was founded in 1996 and started as an ISP. Well, internet lines are a commodity nowadays, even for companies. So now we're hosting web applications; we have two development teams and are hosting roughly 100 servers. We are a RIPE member, a DENIC member, all the necessary requirements. As I said, three teams and all in all about 30 people. The challenges in hosting, as I see them, are availability, performance, cost, and manageability. Manageability is often underestimated, but in my opinion it's the key point that decides the scalability of your entire data center: how many people do you need to manage a couple of servers, and how far can you scale? We now have 100 servers at the current location. We can place about 300, but what if we want to scale to a thousand or even more? Hopefully we will get there one day, and we don't necessarily want to increase the number of employees tenfold.

Okay, one challenge when it comes to server management: how do you do updates? Well, I never do updates, because I never change a running system, right? And how do you do a backup? I mean, nobody wants backup; everybody just wants restore. So, as our first try to get a better management platform for servers, we tried to tackle the individual machine and used a NanoBSD-based setup. NanoBSD was developed by Poul-Henning Kamp for embedded systems. The servers we used had two hard disks; you may know the device names from FreeBSD.
And what we did was put a gmirror on top of those two hard disks - you see, now it even gets technical. On this mirror, we put a couple of partitions, and these partitions are part of the NanoBSD architecture. We have two slices that are for the operating system and all the installed packages. So this is the entire root file system over here, and we have an alternate slice with more or less identical software that is not active. You can update the system by simply using dd to copy a pre-built image to the inactive slice and rebooting into the other one; you can easily roll back, and so on. The persistent customer data goes into a third slice, as does the persistent config data that you need, and the magic of NanoBSD will copy it over to /etc at boot time. Works quite well.

Advantages: core OS and packages are all read-only, so there's one less thing to worry about. Even if somebody had root access, he could not build, say, a trojan into sshd, a keylogger, stuff like that. The partition is not modifiable, and if you raise the secure level of your system, it cannot be remounted at runtime. So you're secure in that regard. You get atomic updates: hopefully, if your build procedure is okay, you have everything in one consistent state fitting together - no missing libraries - and you get one system that you know will work. You can roll back by simply activating the previously active partition. Sometimes that's not quite true: for example, when you upgrade from FreeBSD 7 to FreeBSD 8, the on-disk metadata of the gmirror module gets updated, so you cannot go back. But most of the time you can. And if you do it right, you have identical software on all your servers.

But there are also some drawbacks. The first one was homemade: we did not, at the time, automate image creation. So it was still a manual process, installing the packages from ports into the new NanoBSD image. Sometimes people made mistakes; we had inconsistent software versions, stuff like that. Then, caused by the architecture, we need a reboot of the entire machine for each update. And we cannot easily install additional software after a machine has been provisioned with a certain set of software, without going through the whole process of building a new image and everything. And sometimes a customer calls in and says, well, I need this additional PHP module - so how do you go about that? And we have one PHP version, one MySQL version and so on for the entire machine. So if you have a shared environment, like Apache vhosts or whatever, you cannot have different PHP versions for different customers on that very same system.

We addressed some of these. With the knowledge we acquired during the last couple of years about infrastructure as code, we dug deep into Vagrant. So now, NanoBSD image creation for our legacy servers is a simple "vagrant up", and an up-to-date image will come out of that after a three or four hour coffee break. No problem. You can run it on a server, trigger it from Jenkins, whatever - the entire continuous delivery way. It works really great. And we run our own Poudriere to build our own packages, which is an absolutely fantastic tool. When people ask me "why FreeBSD?", I always say they give you infrastructure, not products, and Poudriere is definitely one of those infrastructure pieces.
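To make that concrete, here is a minimal sketch of a typical Poudriere workflow; the build jail name, ports tree name, and package list path are invented for illustration:

```sh
# Build a FreeBSD 11.1 package set with Poudriere (names are illustrative).
poudriere jail -c -j freebsd111 -v 11.1-RELEASE   # create the package build jail
poudriere ports -c -p quarterly                   # check out a ports tree
poudriere bulk -j freebsd111 -p quarterly -f /usr/local/etc/poudriere.d/pkglist
# The result is a pkg(8) repository that all servers and images can install from.
```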
So, what do we want from the new architecture? We want an even better isolation of customers on the same machine, something virtual-machine-like. We want individual configuration per customer, a couple of instances on one piece of hardware, faster updates, and everything fully automated. Not that surprising altogether; that has been the talk of the last years in data center management and hosting.

So, there are two technologies that are currently, I think, the big kahunas here. I've been to the OpenStack Summit in Atlanta, so a private cloud with a hypervisor-based architecture seems to be the thing to do. Maybe. Or that one - Docker; I must admit, they have a cute mascot. You build it, you run it, containers. Yeah, 100% buzzword compliance, but it didn't fit us that well.

So why not? Why not a hypervisor, one VM per customer? We decided against hypervisors because it would increase our workload multi-fold, because each VM is a separate system. You cannot easily dig into the VMs' file systems from the outside to do updates of 100 VMs at the same time; you have to treat each VM as a separate host, even if you automate things with Ansible or Chef. Wait a minute, let's go back. Here, we would have 16 machines to manage, although it's only four pieces of physical hardware. Then you have a little bit of overhead - not that much, given how powerful today's machines are, so that's not really a problem - but you cannot smoothly over-provision memory and CPU in a hypervisor the way you can in a symmetric multiprocessing environment; you have fixed resource sets for each VM. And then, which virtualization technology to pick?

When VMware came up to be the big guys, I talked to them at every CeBIT fair, year after year, and I asked them: okay, you always tell me I can save so much money by employing VMware - just do the math for me, you're the sales guys. And they just couldn't, because their math depends on lots of servers that are running Windows-like operating systems and are mostly idle, like in enterprises. Of course, you have your Active Directory, you have your database server, you have your Exchange server, and if you can fit all of them on one machine, then you're saving, of course. But for a data center where we rent a dedicated piece of hardware to a customer, and the customer wants this precise amount of CPU, RAM, and so on, that doesn't scale if you count the cost of VMware licenses on top. Plus, hardware that is 10 times as powerful as a standard one- or two-socket one-unit server is most of the time more than 10 times as expensive. So, not that much scalability in our case. And storage is either fast or cheap. Yeah, I didn't forget reliable, but reliable is not optional - you don't want to leave out reliable, do you?

Okay, so what about that container thing with the cute blue whale? I must admit, they have a cute logo. Well, no. At least that's what even Docker proponents and people working in the field keep telling me: if you want to SSH into a container, you're doing it wrong. It's all about orchestrating containers from the outside, and you don't do that. Okay, why don't you do it? Because it's supposed to work like this: you have to look at virtualization technologies with the question, what precisely does the technology actually virtualize? If you have a hypervisor, it's like IBM's VM back in the 70s: it virtualizes the entire machine.
You get N machines instead of one, and you put an operating system on each: kernel, bootloader - hey, VMware even has a BIOS setup for the machines that you can get into. You're virtualizing machines, and you can run arbitrary kernels; that's the plus side. You can run Windows, you can run Linux, you can run FreeBSD. Okay, when we have jails, which are much more lightweight, we essentially virtualize /sbin/init. A jail is just a file system tree starting at some top-level directory, and then we can bootstrap an entire operating system with everything but the kernel. So we start another init process and then go from /etc/rc all the way down until all the services are started. And Docker actually aims to virtualize just a single process. Just one thing: an nginx server, a MySQL database, MariaDB, Elasticsearch, what have you.

And it happens that our customers are of the style that they want the full stack on one machine, with persistent storage, and they want to SSH into it. But we as operators don't want to give them a hypervisor-based, fully emulated virtual machine, because that's too expensive. So we meet precisely here, in the jails. This is a rough summary: they want the semantics of a VM, they want to feel alone on their system, and we want fast provisioning, easy updates, and low cost. Okay, jails are at an advantage here because they look like a VM to the customer and they have low overhead. Contrary to Docker, they don't require a separate server process. And now it's getting really beautiful, because from the view of the host system, all the processes inside the jails are just regular system processes, and all the jails' file systems are just regular system file systems. So from within the host system you can, with local file system semantics, always touch and twiddle and tweak all the stuff that is in the jails - which you can do neither with Docker nor with a VM. And the latest innovation is the virtualized network stack, which is absolutely great. If you know your stuff, you can do whatever you like. I'll get into the architecture a little bit later.

This virtualized network stack, VNET (or vimage) in FreeBSD, introduces the epair interface. An epair is essentially a virtual patch cable, and one end of the cable happens to be inside the jail as an interface that you can ifconfig up, down, give an IP address, IPv6, everything. The other end happens to be an interface in the host system, and every packet that the jail's processes write into their interface happens to come out at the host side, and the other way around. It's got its own MAC address, so you can bridge, route, and network-address-translate to your heart's content. We use bridges to connect those interfaces. I'll show you the architecture: we have a couple of customer jails, and the epair interface is called vnet0 in all of the jails. Then we have a bridge interface which is connected to the physical interface of the host machine, which is connected to the wire. And the beauty is, you don't need to do it that way. Instead of the physical interface, you can use a cloned loopback interface instance, for example, assign a private address to that, and then use your host system's routing and NAT capability to do whatever you like. You can create multiple of these in every jail, you can connect them to VLAN-tagged subinterfaces, you can use a jail as a router, whatever you desire. It just works. And if you know networking basics, it's really, really straightforward.
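As a rough illustration, here is a minimal hand-rolled sketch of that setup; interface, jail names, and paths are invented, a VNET-enabled FreeBSD is assumed, and jail managers like iocage do all of this (including renaming the jail end to vnet0) for you:

```sh
# Host side: a bridge connecting the physical NIC and the jail's epair end.
ifconfig bridge0 create
ifconfig bridge0 addm em0 up
ifconfig epair create                  # creates the pair epair0a/epair0b
ifconfig bridge0 addm epair0a up       # host end of the "virtual patch cable"

# Start a VNET jail and push the other end of the cable into it.
jail -c name=vpro0001 path=/jails/vpro0001 vnet=new persist
ifconfig epair0b vnet vpro0001         # move the interface into the jail's stack
jexec vpro0001 ifconfig epair0b inet 192.0.2.10/24 up
```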
Okay, the shameless marketing plug. We called the new product the proServer; this is how we address the customers. What is so "by developers, for developers" about that product? It is that we try to put all the technologies in there that modern web applications need, for a certain set of applications. We're still PHP-based - some people would challenge the "modern" when it comes to PHP - but we put Elasticsearch, Logstash, MariaDB instead of MySQL, nginx and Apache and everything you might want as a PHP developer in there. So the customer gets a full set of features and doesn't have to install it themselves or care about it - that update thing again. It's a product that combines the power of a managed platform with root access: configuration-wise, the customers can do whatever they want, but we take care of all the software in an automated fashion.

Customers get either a virtual proServer, which is a jail instance, or they can rent a dedicated proServer, which is an entire jail host, and they can put as many jails on it as they desire. For us, it's all the same technology, so it's easily manageable. Our current virtual proServer host is all SSD-based - all ZFS, of course - with 256 gigs of RAM, 20 cores, 40 threads. Currently we have one machine with 50 jails on it, and it's just twiddling its thumbs. It could do more.

So how do we actually manage this zoo? The stack has grown quite big. The jail is, of course, the most important element of abstraction here, and we have looked into jail managers. There was ezjail in FreeBSD pretty early on. Then there was the Warden jail management tool, which is part of FreeNAS - still in FreeNAS 11 - and the new FreeNAS jail manager, which is called iocage, currently being rewritten in Python. We are helping a little bit and are actively involved in the development of that Python iocage by Brandon Schneider. It's a great tool, and if you like, check out the GitHub page.

These tools are supposed to help with the management of jails. So how do they do this? All of them - I'm not quite sure about ezjail, but Warden and iocage do - have the concept of a template jail. You create a jail with a certain FreeBSD distribution, say 11.1, and you install all the software you need - PHP, MySQL, all the same stuff again - into the template jail. Then you want to actually do something with it, so you instantiate it for a certain customer or a certain application. By default, in Warden as well as in iocage, this happens like this: you create a snapshot of the ZFS dataset that contains the template jail, and then you clone the snapshot. Problem: these are copy-on-write clones of a ZFS snapshot, and a ZFS snapshot is immutable. So again, we're facing this: one does not simply update. How am I going to update these things? What I as an operator want is not three instances that I all need to address separately. I want to update the template - which I can do, but it won't propagate into the instances, because the instances depend on the snapshot.
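In raw ZFS terms, this default instantiation looks roughly like the following; pool and dataset names are invented:

```sh
# Snapshot the template dataset and clone it once per customer.
zfs snapshot zroot/jails/template@release
zfs clone zroot/jails/template@release zroot/jails/customer1
zfs clone zroot/jails/template@release zroot/jails/customer2
# The clones are copy-on-write children of the immutable snapshot:
# updating zroot/jails/template afterwards changes nothing in the clones.
```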
Okay, questions so far? Good. So what we came up with is something we call blueprint jails. We made up the term "blueprint jail" because "template" is already a keyword in iocage, and we wanted to avoid confusion - so, first sentence: these are not iocage templates. We just create a regular jail with a FreeBSD release and install all the software that we need; the packages come from our own Poudriere, and all of this is done automatically with Ansible. And after the initial creation, we shut the blueprint jail down and never touch it again.

And then we come to an instance jail. An instance jail for a customer application is an empty jail in iocage - iocage can do that. Then we mount the blueprint jail on top, read-only, with nullfs; that's a local mount that preserves the full POSIX file system semantics. And all the read-write directories that a customer's application might need - /etc and so on - are separate ZFS datasets that we mount read-write on top of that. All the ZFS mountpoints are set to legacy, and iocage has an fstab feature by which it automatically mounts a certain set of directories and file systems at jail startup.

So the file system layout is like this: we have an empty base jail - this instance is vpro0048, that's just the proServer product that we manage at this point. Then we mount our blueprint jail on top; the blueprint jails are named after the quarter of the ports collection that we use, which PHP version is in there, which Elasticsearch version is in there, so we can have different configurations that we apply to different customer jails. And then we mount all the writable directories on top of that. The blueprint mount is a nullfs mount and it's read-only; the other mounts are standard ZFS mounts and they are read-write. The fstab looks like this - I shortened it a bit so I could use a larger font, and I'm not quite sure if fstab would accept continuation markers, but you get the idea of what I'm trying to express here; a reconstruction follows below.

We have some interesting things in there. This is one of the big machines with the shared, virtual proServer products, so we currently have up to 50 instances of these jails on this thing. You can see that for the read-write mounts we have one zpool here that is called zdata - this is a RAID-Z2 - and another one that is zroot, which is a mirror. And all these database directories here - this is /var/db/mysql - are specifically tuned for database operations, so the block size is different, the metadata cache is different, that kind of stuff. And another thing: you have the blueprint jail's read-only mount first, then we mount, for example, /etc and /usr/local/etc read-write on top of that, and then again we mount the rc.d directory, for example, read-only on top of that one. So the customer always gets the startup scripts that match the packages that are installed. We do this for the startup scripts and for the package database, so "pkg info" inside the customer's jail gives consistent output and everything works. Nope - our customers choose this product because they don't want to manage the software themselves. We do it.
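Reconstructed from the description above, the layered fstab for one instance jail looks roughly like this; paths, pool, and blueprint names are invented, and fstab lines are mounted in order, so later lines stack on top of earlier ones:

```sh
# /iocage/jails/vpro0048/fstab (illustrative)
# 1. The read-only blueprint provides the whole OS and package set.
/jails/blueprints/2017q4-php56  /iocage/jails/vpro0048/root                nullfs  ro  0 0
# 2. Writable per-customer datasets (ZFS mountpoint=legacy) on top.
zdata/vpro0048/etc              /iocage/jails/vpro0048/root/etc            zfs     rw  0 0
zdata/vpro0048/usr-local-etc    /iocage/jails/vpro0048/root/usr/local/etc  zfs     rw  0 0
zdata/vpro0048/var-db-mysql     /iocage/jails/vpro0048/root/var/db/mysql   zfs     rw  0 0
# 3. Read-only overlays so rc.d scripts and the pkg database always
#    match the packages installed in the blueprint.
/jails/blueprints/2017q4-php56/usr/local/etc/rc.d  /iocage/jails/vpro0048/root/usr/local/etc/rc.d  nullfs  ro  0 0
/jails/blueprints/2017q4-php56/var/db/pkg          /iocage/jails/vpro0048/root/var/db/pkg          nullfs  ro  0 0
```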
Okay, provisioning of the entire system works like this: the proServer host is installed via PXE boot. We have managed to get this completely unattended, including ZFS and everything, up to the point where you can manage the system with Ansible. The first version of that installed a Chef client and registered it at the Chef server; we've since switched to Ansible for various reasons. So we can install the hardware completely automatically. The blueprint jail is installed with Ansible using the packages from our Poudriere, and the instance jail is, again, provisioned with Ansible. And that's about that.

So how do we update a customer's instance now? The great thing about jails is that a jail need not be running for you to access it from the host system, and even to do things inside the jail. I cannot stress this enough to people who are not familiar with FreeBSD; it's just an incredible feature in my opinion. The jail is just a file system tree lying somewhere on your hard disk, and if you have a working resolv.conf inside the jail - the jail need not be running - you can just chroot into it, use the host's networking, and do a "pkg upgrade". I don't know any other container technology that can do this.

Well, we don't do that. Why? Because of that buzzword here, immutable infrastructure, and because of the rollback story. Once the blueprint jails are created and provisioned, we don't change them again. If we want to do updates, we create a new blueprint, and then we update all the instances that depend on it, because we just have to iterate over all the customer jails: shut down, change mount points, boot. That takes about 30 seconds. So the customer gets a 30-second interruption of service and has all the latest security fixes.
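Conceptually - and this is only a sketch; in reality this is Ansible code, and the jail names, blueprint paths, and iocage fstab location are assumptions - the update amounts to something like:

```sh
#!/bin/sh
# Re-point every instance jail from the old blueprint to the new one.
OLD=/jails/blueprints/2017q4-php56
NEW=/jails/blueprints/2018q1-php56

for j in vpro0048 vpro0049 vpro0050; do        # in reality: iterate all instances
    iocage stop "$j"
    # Swap the nullfs source of the read-only blueprint mounts in the fstab.
    sed -i '' "s|^${OLD}|${NEW}|" "/iocage/jails/${j}/fstab"
    iocage start "$j"                          # ~30 seconds of downtime per jail
done
```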
Okay, backups. We have ZFS. Easy: we do snapshots. You can do them hourly, you can do them daily. There's the sysutils/zfstools port that contains a zfs-auto-snapshot utility that you can run from cron. You just tell it how many snapshots you want to retain, and it works similarly to Time Machine. So you can say: hourly for 24 hours, daily for seven days, weekly for one month, and so on, and you have all those snapshots on the local system. So the "help, I need to roll back something I did to my application" case of restore is solved. And for disaster recovery, we copy these snapshots to a different system, a larger backup server with just lots of storage. We plan to have one of those per rack, to have them physically near, have faster network connections, and distribute the load a bit. I found a tool on GitHub called zfs-backup, originally from the Solaris world. It hasn't been updated for two or three years or so, but it works quite well. It's a shell script; I liked it. We have built a port out of that, but it still needs a little bit of polishing before I can submit it for inclusion in the ports tree. But I will do it, promise. It works for us now.
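A hedged sketch of what that scheme looks like: the crontab entries follow the zfstools pattern, the send/receive pipe stands in for what the backup script automates, and host, dataset, and snapshot names are invented:

```sh
# /etc/crontab - Time-Machine-like rotation with sysutils/zfstools.
# (Datasets opt in via the com.sun:auto-snapshot ZFS property.)
15 * * * * root /usr/local/sbin/zfs-auto-snapshot hourly 24   # keep 24 hourlies
30 1 * * * root /usr/local/sbin/zfs-auto-snapshot daily   7   # keep 7 dailies

# Disaster recovery: incremental replication to the per-rack backup host.
zfs send -R -i @daily-old zdata/vpro0048@daily-new | \
    ssh backup1 zfs receive -du backup
```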
So, this was the overview of our architecture and how we run our hosting. After we tackled some of our problems, what's left open and what is left to do? We essentially replaced the VM stuff that is difficult to manage with jails, and we developed the tooling that makes them easy to manage. What we would of course like to have is some sort of central storage for all of this - central, distributed, I'm not quite sure - without introducing a new single point of failure. I submitted a birds-of-a-feather session that is starting after this talk, all the way upstairs, and I would love it if anyone interested in high-availability storage on FreeBSD would come by to discuss concepts. I must admit that I don't have a plan or a ready-to-go solution, but I have some ideas, and I'd just like to discuss them.

Another point would be if we could offer self-provisioning to customers. Like in a true cloud solution, they could just click and get their jail instance specified to their needs. That would be nice, and it would essentially give us a private cloud solution. We are planning to go that way slowly, given the resources we have with four people. The first thing we want to implement is an API for all that Ansible code we have - possibly REST, possibly something different - and when the API is done, the front end can be implemented by anyone. But as I learned today, possibly something like this already exists. There are some people at the booth upstairs from - oh, that was a difficult name - XTinfinity, and they claim to have a complete OpenStack-like private cloud infrastructure based on FreeBSD, and I'm definitely going to check that out. And I mean OpenStack-like, not OpenStack-compatible or anything. They're doing everything on ZFS and jails, and they say they have a control panel and distributed storage, and I'm really curious how far ahead of us they are - or if they are essentially not that far ahead. I don't know; I really don't have a clue at the moment.

So, now for that one. Yeah, we use the resource limits that were implemented for jails not so long ago, and we apply them, again, with Ansible. Okay, so the question was: how do we control resources? Do we give every customer all the 20 cores and all the 40 threads? I said no, we just use the resource limits. Our experience shows that for CPU cores they work quite well; for memory, not so much, and I have to look a little deeper into how it's actually implemented and how it works, because I figure it might be a hard problem to do something like this with jail technology. Any clue, Kirk, how it is done - resource limits for jails?
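For reference, a sketch of what such limits look like with rctl(8); the jail name and values are made up:

```sh
# Requires kern.racct.enable=1 (a boot-time tunable in /boot/loader.conf).
# Limit a jail to roughly 4 cores worth of CPU and 8 GB of RAM.
rctl -a jail:vpro0048:pcpu:deny=400        # percent CPU, 100 per core
rctl -a jail:vpro0048:memoryuse:deny=8g    # memory enforcement is the weak spot
rctl -u jail:vpro0048                      # show the jail's current usage
```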
So now there were a few comments in the room that what you've done is re-implement some of the ezjail features in iocage. Have you done it in such a way that you could commit it upstream to iocage, so that iocage can catch up with ezjail on this front? Well, we send all our changes to iocage to Brandon for inclusion, which he did. But we're actually not really re-implementing ezjail in iocage; we just create a custom fstab for iocage to use - to, as I now learned, re-implement something similar to ezjail. So actually, there is nothing to open-source: if you take the current iocage and give it an fstab file like the one I showed you, there you go. A question from me: when we investigated jail management, which was quite some time ago, it looked like ezjail was more or less a dead-end project, and then iXsystems started to sponsor the re-implementation of iocage, so that's why we put our bets on that. So when did ezjail wake up again? - The honest answer is, I think there's still an ongoing effort in FreeBSD to revamp the way that you run and configure jails, and this is not a done thing, so that's where you're stuck as well, I guess. Thanks. One last thing: FreeBSD uses jail.conf, and ezjail doesn't support that. There was a set of patches to work with that, but they were never committed.

Yeah, thanks for the talk. I have two questions. First of all, those resource control things: do they cause any trouble with Java? I ask because Java assumes that you are running with, like, 200 gigs when you actually have like two, and it causes problems when it tries to allocate it all. And the second thing: have you considered using unionfs for the defaults and letting users alter their configuration if they want to? Thanks. - Okay, second question first. I've pointed out on another occasion that as an operator, I want unionfs-like semantics instead of copy-on-write clones, so I understand your question. The fact is that currently unionfs is simply broken and not being worked on much, as far as I can tell. And it seems to be a problem that is not at all easy to solve for the general case, because of what happens when you delete things in the upper layers and how that propagates down. So possibly this will never see the light of day. So, first question: yes, that's why I made that remark about memory management and resource limits. As I said, it works really well for CPU cores, but we have had Java processes run astray and consume considerably more memory than they were entitled to, and we had to fix this at the application level - just contact the customer and politely tell them to limit their Elasticsearch memory, stuff like that.

So, two things. The first one is, you might not be aware that I think Steve Wills is working on a port of the now open-sourced Ansible Tower, or AWX, to FreeBSD, and that would provide you with a REST API to put your UI sugar on top. So that's probably worth a look. The other question is: how do you deal with things like /dev/log in your jails? - The second one first: since the finished product is a managed root server, all the jails have their own private logging, because the customer is root - with the exception that he cannot install his own software, which we manage for him, and which is what people seem to appreciate. So that's the /dev/log thing. And we haven't looked into Tower yet. If you're interested: we started our data center automation with Chef, and we ran a one-quarter-long project where we worked very intensively with developers and operators and implemented a really huge - now, I must admit, over-engineered - system for managing Chef recipes and cookbooks. We had unit tests, we had integration tests, we had Serverspec tests; we provisioned the cookbooks and pushed them into the Chef server with Jenkins and then finally loaded them down to the managed hosts. We had that version-pinning thing with a Berksfile and everything you can imagine, and it was just definitely over-engineered. So now, with Ansible, we go with a more lean approach. There are actually still hard-coded constants, things that apply only to our data center, so it doesn't make sense to open-source that part, because nobody else would be able to work on the project together with us. I'm very willing, and we're very open, to share anything we created as far as knowledge is concerned. And our approach to the Ansible stuff is that we are continuously refactoring the entire thing every couple of months anyway, so it will probably never be a finished product or something that is usable in the general case.

How do you deal with the security risks of accessing data and things within a jail from outside the jail? For example, if you have a symlink inside the jail and you look at it from outside and it's absolute, it points to something else. - We use "iocage console" to change into the jail most of the time. Okay. And when we don't, we hopefully know what we're doing.

Yeah, I had a question: what benefits does accessing the jails from the outside give you? It's a bit of a trick question, because we're doing the same ourselves, but I would like to hear your ideas first. - To me, it's not so much accessing jails from the outside; it's the fact that these are actually local file system mounts. So you can share, for example, the same file system or dataset among multiple jails, and all of them have local semantics. So you can have a Unix domain socket on them, like mysql.sock or php-fpm.sock, and that way you can isolate the database, the web server, or PHP-FPM from each other. You can have two jails running different versions of PHP-FPM that both have the same customer files and PHP application dataset mounted, so the customer can try his application with different versions of PHP, all that stuff.
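A minimal sketch of that sharing idea, with invented paths; both mounts give full local POSIX semantics, including Unix domain sockets:

```sh
# One customer dataset, mounted into two jails running different PHP versions.
mount -t nullfs /jails/data/customer1 /jails/web-php56/root/usr/local/www
mount -t nullfs /jails/data/customer1 /jails/web-php70/root/usr/local/www
# A socket created inside one jail, e.g. .../tmp/php-fpm.sock, is visible
# and usable through the shared mount, so the same application files can
# be served by either PHP version.
```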
The single occasion where I really go into a jail and access it from the outside is, as I said, when I want to do an update or a quick modification - add a package that will, of course, later go back into the Ansible code. If I need to manually add a package to one of our blueprint jails - and the blueprint jail is of course not running while it's mounted into all those customer instances - then I can just chroot into it and "pkg install" something. I could even skip the chroot and use "pkg install" with a destination path. So that's about the only application we have. But the general architecture of local file system mounts beats everything, in my opinion, and you can do so many fancy things with it that I don't want to miss it.

Now, what are you doing with them? - Okay, I forgot that one. Of course, we run the stock security scripts and daily scripts, as well as our own ones, only on the host system and not in the jails.

I also use Ansible and jails, specifically ezjail and iocage. I like the idea of putting the local system data on separate datasets. I like that idea because I like ZFS, and I like to put anything that you may ever want to use separately into a different dataset, and I'm tempted to do that for my own jails at home, which I'm updating myself. I also like the approach of never updating in place but just creating a new jail, and I may wind up doing that, say, for Postgres. I have a pg01 jail which is in use at the moment, and instead of upgrading that in place, I may create pg02, update everything in there, and then just swap everything over. That's not great for high availability, because there's going to be downtime - but hey, it's at home. So, a few good ideas in what you gave me there. No questions, just thanks. - Great, glad you liked it.

Do you have any idea how much time you're saving now during updates? Because now you just have to shut them down, flip that path, and bring them back up. Is it saving you time already? - Yes, definitely. I can't tell you precisely how much time or money, but updating those NanoBSD servers - despite the fact that it was sort of atomic and everything - was always a hassle, and we dreaded the update days. We've changed to a monthly schedule of doing updates, whether there are security updates or not. We just do it monthly, to keep our customer base educated, sort of; in case of an emergency, of course, we do them right away. And we don't have a problem updating all the systems; we do it in two, three hours. All the jail-based ones, that is. Yes, the interruption per jail is in the order of 30 seconds to a minute, or even less, depending on the number of services the customers are running and how much time they need to write out their volatile data.

So, we've used HAProxy to put stuff behind, and we actually end up with seamless updates as a result. We have an active jail that's performing the service, we have the new, upgraded one in place, ready to go, and then we just set the HAProxy back end for the old jail to maintenance, and it automatically switches traffic over to the new one. I don't know if that works for your customers, but certainly for us it makes upgrades embarrassingly easy - so much so that we now have our databases behind it, our message queue behind it, and our external, third-party-facing APIs as well. It's really, really nice. - So you're doing this for centralized services, like a centralized database server or a couple of them, and a centralized message queue server or a couple of them - not one instance per customer? - Yeah, we're not a hosting service; we're running our own business, and we have multiple databases - clustered databases, a clustered message queue - and they are accessed through HAProxy. So it looks like one server externally, but we have them on multiple nodes. For example, this jail host here sends traffic to its local database, unless that's down for maintenance, in which case it just goes to the nearest one. - Yeah, that's definitely the way to go. We don't do it for these customer jails, because we have a plethora of independent customers who all run their completely independent instances with a full stack, so that wouldn't scale too well. We have not found a way to do that yet.

Hi. What happens if a customer is not okay with a downtime of 30 seconds? What do you do then? Or is it just your business model, saying this is how it's done and you have to accept it? - He can book an additional jail that is not billed as highly, because it's inactive most of the time, work out a replication mechanism with the agency that does the software for him, or with us, and then we switch over to the inactive one, update the active one, there you go. Or have two jails of the same size and simply switch over, of course.

Okay, so no question, just a comment: if you had given that presentation two months ago, I would have told you that you were wrong about the chroot command, but now that Solaris is dead, you are right.

Since you manage to fail over in a situation where you have a separate jail that you replicate to: how do you handle host updates, the physical machines? Do you have the same mechanism somehow, or do you say it's part of the business model? - The host runs very few applications of its own, so it's mostly the base operating system, and we schedule a maintenance window and inform the customers in that case. For several customers who have contracts that match that, we actually switch to jails on a different host, but not for all of them. Okay, thank you.