Next, we have Deploying OpenStack Using Containers by Angus Lees. Just to be very clear, this talk is not about exposing containers to users in any way. It's not about containers as a service. It's not about the Nova Docker plugin or anything of that sort. This is about using containers to deploy OpenStack itself, and the fact that you may have chosen to do that should not be visible in any way to users of your OpenStack install. It's just a deployment choice. What the talk covers: I'm going to start off very quickly with traditional deployments, some of their limitations, and why you might want to consider using containers, and look at one example of that. Then I'll talk about some limitations of that approach and why you might want to look at something like Kubernetes as an extra layer on top of all of it. I'm going to cover containers and Kubernetes very quickly along the way, assuming you don't know what they are, though of course not in enough depth that you'll never have to learn anything else about them. But if you ask questions, I can always say more.

So I'm glad I'm on the big screen here, because these diagrams would be quite hard to see otherwise. Now they're five meters tall, so that helps. The classic deployment model, and this diagram is from the beginning of the OpenStack Operations Guide, has a couple of control nodes, let's say three, a MySQL server, a couple of network nodes if you're running Neutron, and then a larger number of compute nodes. They're running on bare metal, and typically you would install them using Puppet, Chef, Ansible, anything in that class, because you don't want to type it all in by hand every time. So that's the traditional model.

The other common traditional setup is the all-in-one test install: Swift has an all-in-one install, and of course there's DevStack, which everyone uses. These run pretty much everything on one machine. They have a bunch of QEMU virtual machines that they run, but they're very, very different to a production deployment. They typically refer to the other services using localhost. There's only one of every piece. There's no ability to deal with dynamic changes or discovery of other components, because everything is assumed to be hard-coded to run on localhost. Even more so, if you look at a few services, and the Neutron ML2 driver is a good example, the usual DevStack setup uses a special "local" network type which only works on localhost and is nothing you would ever use in a real production deployment. So in a DevStack setup we're exercising code that isn't even the normal code.

So the issues with these: the DevStack install can't scale up. It's really built and hard-coded for the localhost case, and running multi-node is a bit of a stretch. It works, but it's a little bit weird. And the typical production setup, where you have whole machines dedicated to particular roles, can't really scale down. You need a network node, you need at least one controller, and they're typically set up to have MySQL on yet another machine. You can combine some of these, but the scaling unit is the host, and it's a bundle of behavior on that host. If you want more control nodes, you duplicate that and have a whole other control node with the same defined set of services.
So one way to make that more flexible is to use containers. What is a container? Lots of people use this word without knowing what it means, and lots of people wonder what it means. Containers are really the combination of two Linux kernel features. The first is namespaces. You put every Linux process in a namespace, and a namespace can be a group of processes, and you say: okay, you get your own process ID namespace, which means your process ID 1 is not the same as someone else's process ID 1. So you can run your own init, and it's different to the real system init or to the init in other containers, and if you type ps inside there, you only see your own processes; you don't see the process list outside. You get your own filesystem namespace: your idea of what / is, and what you see and what you mount, doesn't affect what happens in other containers. Likewise networking: if you set up a routing table or configure a network interface like eth0, it's a different eth0 to what some other container has. It's all on one kernel, so the kernel has several network interfaces (namespace one's eth0, namespace two's eth0, namespace three's eth0), all there, all in the same kernel. But if you look from your container, you can only see the ones that were configured for your container, and when the kernel acts on your network packets, your filesystem opens, or your kill-this-process-ID syscalls, it only acts on the ones configured for your container.

The other feature is cgroups. Cgroups are a way of limiting the resources available to a group of processes. So you can say: I know I have eight CPUs on this machine, but you, group of processes, can only use four of them. Or: I know I have several gig of RAM, but you can only use one gig. It's a little like ulimits, except more powerful, because it extends all the way down into the kernel. If you're limiting RAM, for example, it's not just the RAM the processes use directly; it's also the shared pages they use from shared libraries, and the kernel buffer RAM used on their behalf for TCP sockets, file I/O, and those sorts of things. So it's a much more comprehensive version of ulimit, if you like.

Containers are the combination of these two things. We can create a group of processes, give it a unique view of the system, and then limit it: give it a subset of the full hardware we're running on. And that's really all it is.
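To make those two kernel features a little more concrete, here is a minimal sketch driven from Python. It is not from the talk: it assumes Python 3.12 or newer on Linux, running as root, with a cgroup-v2 hierarchy mounted at /sys/fs/cgroup; paths and the group name are illustrative and may differ on your distro.

```python
import os

# Namespaces: unshare() puts *future* children of this process into a new
# PID namespace, so we fork afterwards; the child believes it is PID 1.
os.unshare(os.CLONE_NEWPID)          # needs root / CAP_SYS_ADMIN
child = os.fork()
if child == 0:
    print("inside the namespace, my PID is", os.getpid())   # prints 1
    os._exit(0)
os.waitpid(child, 0)

# cgroups: just files under /sys/fs/cgroup (cgroup v2 shown here).
# Create a group, cap it at roughly 1 GiB of RAM, and move this process in.
os.makedirs("/sys/fs/cgroup/demo", exist_ok=True)
with open("/sys/fs/cgroup/demo/memory.max", "w") as f:
    f.write(str(1024 ** 3))
with open("/sys/fs/cgroup/demo/cgroup.procs", "w") as f:
    f.write(str(os.getpid()))
```

A container runtime is doing essentially this underneath, plus filesystem and network namespaces and a lot of plumbing.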
To draw the distinction with virtual machines: the features look similar, but the implementation is very different. With virtual machines you can run different OSs, because you're working at the machine abstraction layer, so I can run a Windows guest on my Linux host. In a container you've only got one kernel, so it has to be Linux, and in fact it has to be the same Linux kernel that all the other containers are using. In a virtual machine, because you have a full system, you need to run a full operating system: you need an sshd, you need an init process, you need hardware discovery as part of your boot-up, you need udev running to respond to hot-plugging and device discovery. You need all these pieces because you're basically running a machine.

With containers, you can assume the outside environment, the main system startup, has already handled all of that for you. So typically in a container you'll have just the processes that are required. If you want to run Apache in the container, you will just have Apache running. You won't have sshd, you won't have init, you won't have systemd, you won't have udev, none of those pieces, because booting up the hardware is all handled for you outside the container. And typically with containers, if you want to do debugging, you'll log into the host and poke at the container from outside; you usually won't log into the container itself. So think of a container much more like a chroot or a forked process group.

Which brings us to security. Typically containers all belong to the same tenant; you don't have hostile containers next to each other. Containers are a very new set of kernel features, particularly the namespacing, and it's a very large attack surface: we're talking about all of the kernel. We're modifying a bunch of behaviors that have been in the kernel for a long time, but modifying them in a way that is new. Root inside one container is now not the same as root in another container, and that's a whole lot of new checks in the kernel that we need to make sure we've got right. Almost certainly there's at least one bug waiting in there that hasn't been discovered yet. You would be very brave, to the point of foolishness, to run multi-tenant containers at this point, containers for mutually hostile tenants. VMs, on the other hand, are a pretty well understood problem space. When you're writing the virtual machine hypervisor and you receive the operation to read that file off disk, that's relatively recently written code, because hypervisors are a relatively new technology, and it was written with the idea that this might be a hostile, untrusted guest, so when I go to read that file on disk I should carefully check which files this virtual machine is allowed to read. So typically with containers we're talking about a single tenant running multiple containers as a deployment option for their software, all within the one tenant. In a real cloud you might well have a single tenant who has a virtual machine and then chooses to break that virtual machine up internally using containers; that would be quite a reasonable and common thing to do.

Now, when people talk about containers in practice, they're usually talking about one of these three. There are probably more, but these are the three I happen to know something about. These systems are convenience wrappers around those core kernel features. They use the same kernel features, so they can provide roughly the same things, but the command-line tools and the way you configure them are different in each, particularly networking. It's quite complex to set up networking between containers, and these three systems offer very different amounts of help on the networking side. The filesystem side of things is pretty much the same across them. systemd-nspawn, if you're not familiar with it, is very simple: it's kind of the core guts of what systemd does when it starts a service in its own container.
It does that as a security-isolation measure, and systemd-nspawn is just a way to use that for your own purposes rather than only for a regular systemd service. LXC was perhaps the first of these, I think, and it's a little more basic in what it offers, a little closer to the raw kernel primitives. Docker is, of course, the new guy that everyone likes to talk about, and the main thing Docker adds over the others is the ability to bundle up the files related to a container, put them on a server somewhere, and then download and install them again very easily. It's a bit like apt-get for containers: you just run docker pull with the name of a container, it goes and finds it on a centralized server somewhere, downloads it, unpacks it, makes it available, and runs it. So it's a very, very easy way to use containers.

The interesting thing is that it's portable across distributions. The only thing Docker assumes is that you're running on an x86-64 Linux kernel, and it uses the fact that the x86-64 kernel ABI is very stable and standard. You can go between a Red Hat kernel, a Debian kernel, or a Gentoo kernel, and the same file-open syscall is completely binary-compatible across all of them. What this means is you can have a container which contains a Red Hat filesystem tree, so you're using the Red Hat libc and the Red Hat Apache, but when it issues a syscall to the kernel, it's totally compatible with the Debian container next to it that has a Debian libc. You've avoided a lot of the library ABI, DLL-hell problems you get between distros, because we're taking the kernel layer as the compatible interface. As a way of distributing software, Docker is quite interesting: you can provide a Docker image of your software, choosing fairly arbitrarily this distribution, these libraries, these dependencies, plus the actual bit of software you wanted to ship, bundle it all up in a Docker image and make it available, and then anyone on any distribution can grab that same image and run it. It pulls down that whole stack of software and just runs it on whatever kernel they have, because the kernel is fundamentally compatible. That's a lot of why the excitement is around Docker; it's quite novel, really, to be able to do that on Linux.
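As a rough sketch of that pull-and-run flow, here is what it looks like through the Docker SDK for Python (pip install docker). This isn't from the talk: it assumes a local Docker daemon, and the httpd image and port numbers are just examples.

```python
import docker  # Docker SDK for Python; talks to the local Docker daemon

client = docker.from_env()

# "apt-get for containers": fetch an image from the registry. The image
# carries its own userland (libc, Apache, ...) from whatever distro it was
# built on; only the kernel ABI is assumed to match the host.
client.images.pull("httpd", tag="2.4")

# Run it. The only process inside is Apache: no init, no sshd, no udev.
container = client.containers.run(
    "httpd:2.4",
    detach=True,
    ports={"80/tcp": 8080},   # expose the container's port 80 on host port 8080
)
print(container.short_id, container.status)
```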
Right, so here's another diagram that you probably couldn't read unless it was stretched to five meters high. You don't need to read the stuff on the right there. This diagram came from the Rackspace private cloud documentation. Rackspace now deploys their clouds for customers in containers; the primary components are deployed in LXC containers. The diagram on the right shows an example of a single host, broken up into a couple of container slices, the light gray horizontal slices. The first one is a networking container, the second is a storage container, and the third is an "other" container. And you can see the networks, the blue and the red boxes, are only made available to some of the containers, depending on what they need. Only the networking container is hooked up to the tenant network, the blue boxes, and only the storage container and the other services are hooked up to the light red, the storage network. So we're running all of this on a single host, but we now have a little more isolation between services, a little more security.

We've limited what's available to some of the services compared with what we had before. And it's also a little more flexible: we can pack some of these containers onto different physical hosts more easily than we could before; we can pick them up and move them. But otherwise it's all the pieces you already know about. There's a container running RabbitMQ, a container running Galera and MariaDB, a container running Keystone, all the pieces you knew about before. Because there's a little less overhead per container (you don't have a whole new kernel, you don't have a whole new how-do-I-boot-my-system, sshd-management layer), the footprint is a bit smaller than it would be with three virtual machines running on a host. So we can pack the minimum footprint down a little smaller, and I think that was one of the driving use cases for Rackspace Private Cloud doing it. If you want to look further, all of the Rackspace Private Cloud configs are available on GitHub somewhere.

Now, that's pretty good. We've solved a bunch of problems we had and gained a bunch of extra features. But it's not all roses. The difficult cases with containers are anything that leans heavily on the kernel, anything that demands a lot from the local kernel, because fundamentally the kernel is still shared. Some kernel features don't have namespaces integrated through them yet, and perhaps never will. One of those relevant to us is iSCSI. So Cinder, which leans heavily on iSCSI, is quite difficult to run in containers; in fact impossible if you want to use the iSCSI back end. Neutron network nodes are quite possible, but they're more interesting, because again they lean heavily on the networking features of the kernel, so you need to be a lot more careful in how you set up your containers. It's a lot more demanding of your container environment.

And other than that, we're still making manual placement decisions. Someone still has to decide: I'm going to pack those three containers onto that host and those four containers onto that host. You're still having to say, that container's really big, perhaps it won't fit with that other really big container, they both use a lot of RAM. And related to that, when one of those machines dies, you have to make that decision again about where to pack those pieces. There's a human who has to come in, modify your Puppet configs or whatever you use to set up the containers in the first place, and say: actually, MariaDB should run on that host now, these should run on the other host, and then do the typing to make it change.

So really, a lot of the OpenStack control pieces, if we put the hypervisor aside for a moment, the Nova API servers, the Neutron server components, even Glance, are really just like a regular web app. They're a web server that takes the information out of API requests, does some munging, perhaps some database operations, and spits back a result. That's not a new problem; it's a common problem we have everywhere: how do you deploy those nicely?
And we're in a situation where we have, or potentially have, lots of machines. We want to worry about horizontal scalability; we want to worry about machine failure. The way this is dealt with in other spaces is by treating it like a cloud-native app. There's a set of features you use if you want to deploy on Amazon or somewhere like it: you use locking stores, you use object stores, you use load balancers and reverse proxies and all those sorts of things. Perhaps we could use the same things for our OpenStack control jobs; I strongly recommend it. Thank you to Robert Collins for pointing me at this: 12factor.net, the Twelve-Factor App. It's a short book, a short article you can read on that website, which does a very good job of explaining what a cloud-native app should be like and how you should write one. It gives you design advice and things you should think about. It's definitely worth your time; it'll only take half an hour or an hour to read.

So, again, the key features: typically you separate stateful from stateless. Your stateless servers can be restarted without too much stress and worry, and if you want to handle twice as many queries per second to your web service, you just run twice as many of them. Your stateful pieces are where it's hard: they're storing files on disk, they're your database, or something like that. By separating that out, you separate the easy problem from the hard problem and focus on how you're going to store things on disk reliably. You design for horizontal scalability from the beginning: if you want twice as many queries or twice as many users, you want to be in a situation where you just run twice as many of that piece. I want twice as many Glance file-read operations, I just run twice as many Glance servers, ideally. You assume from the beginning that hardware failure is normal, because you're in a cloud environment where you may have thousands of machines; there's always one that has failed at any point in time, that's normal, and there shouldn't be a human who has to wake up and deal with it. And then, more from a software development point of view, it'd be really nice if our test and production deployments were as similar as possible. This was one of the big surprises I had coming into the OpenStack development community: DevStack is so different from the recommended typical deployment. You would never deploy using plain DevStack, because it uses all these local-only features; it's simply not a production deployment.
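As a rough illustration of that twelve-factor style (not from the talk, and not from any OpenStack service; the variable names here are made up), a stateless handler takes all of its backing-service locations from the environment, so the same code runs identically in a test setup and in production:

```python
import os

# Twelve-factor style: configuration comes from the environment, not from
# code, so test and production differ only in what these variables are set to.
DATABASE_URL = os.environ.get("DATABASE_URL", "mysql://localhost/example")
MESSAGE_QUEUE_URL = os.environ.get("MESSAGE_QUEUE_URL", "amqp://localhost")

def handle_request(request):
    # Stateless: nothing is kept on local disk between requests, so you can
    # run two or twenty copies of this process behind a load balancer.
    return {"database": DATABASE_URL, "queue": MESSAGE_QUEUE_URL, "echo": request}
```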
So basically we want something more like what we're used to from PaaS stacks for running our control jobs; we want some of these higher-level services. I did a bit of a look around at a bunch of the different ones available, and I personally chose to experiment with Kubernetes. You could certainly pick others. Kubernetes has a few advantages I found over some of the competitors. It's very lightweight to install. At least at the moment, Kubernetes is single-tenant only, which for my case is an advantage because it's very easy to install: I don't have to worry about configuring user accounts or anything like that. I just run a Kubernetes API server, a kubelet on each machine, and etcd (it's similarly easy to run an etcd server on each machine), and I'm done. Kubernetes is installed.

So it's a very lightweight layer on top of bare metal or on top of VMs. It fundamentally installs groups of Docker containers in something it calls pods. A pod is just a group of containers that will always run together. So if you have a bunch of containers that need to talk to each other all the time, it runs them in a pod, and it fiddles with the networking setup so that they can assume they're all running on the same host, the same localhost. If you're running an Apache server in one container and a MySQL server in another container, put them in the same pod and they can refer to each other using localhost and whatever port number you run your MySQL server on.

It stores its state in etcd. etcd, if you're not familiar with it, is a key-value store that uses the Raft consensus protocol. Basically, you get keys that look like file paths (they're not in the filesystem, but they have that same /name/name/name shape), and you say, I want to set that key to some value; it's fairly common to store a JSON blob in there if it's a complicated something. Then you can go to any of the other etcd servers and say, read me that value, and you'll get it back. That doesn't sound like rocket science, but it's also resilient to machine failure: the data is stored on all of the etcd servers, and only a majority of them need to be up at any point in time. For example, I typically run this on five virtual machines, and only three of those actually have to be up for me to read and write those values reliably. So it's very resilient to a small number of machines failing; it'll continue to work and not lose data.

It also has an interesting load balancer. Kubernetes chooses where to run your pods for you and then runs them, so you need a way to find that MySQL pod from your Apache pod. Instead of hard-coding IP addresses, or instead of using DNS, you go to the Kubernetes local proxy. Every single Kubernetes node runs a little proxy and sets some environment variables inside your container, so it's very easy to find in a portable way. You say, I'd like to connect to my MySQL service, and you go to the port number that means the MySQL service, and the proxy knows about the Kubernetes internals: hang on, I'm running a MySQL pod over there, and as it happens another one over here, and I'm health-checking them and I know that one is responding, so I'm going to forward your TCP connection to that one. So you're basically getting a load balancer without needing dedicated load-balancer hardware. You're also getting a very scalable load balancer, because it's not going through a central point: every host does its own load balancing and sends you directly to the back-end hosts, without ever having to go up through a single load balancer and back down to reach your back end. MySQL is perhaps a bad example here, but if you were going to a DNS server or an Apache server, something that scales horizontally very well, you might be running five copies of them, and it'll take turns sending you to a different one each time, so it load-balances your queries across the servers as well.
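Two of the mechanics just described are easy to sketch from Python: writing and reading etcd over its v2 HTTP API, and finding a service from inside a pod. This is not from the talk; the endpoint, key name, and JSON blob are made-up examples, and it assumes an etcd member listening on 127.0.0.1:2379.

```python
import json
import os
import requests

# etcd: a path-like key-value store (v2 HTTP API; the endpoint is illustrative).
ETCD = "http://127.0.0.1:2379"
requests.put(
    ETCD + "/v2/keys/demo/mysql",
    data={"value": json.dumps({"host": "10.1.2.3", "port": 3306})},
)
# Read it back from this member or any other; as long as a majority of the
# etcd servers are up, reads and writes keep working and nothing is lost.
node = requests.get(ETCD + "/v2/keys/demo/mysql").json()["node"]
print(json.loads(node["value"]))

# Service discovery from inside a pod: for a service named "mysql", Kubernetes
# injects environment variables that point at the proxied endpoint.
mysql_host = os.environ.get("MYSQL_SERVICE_HOST", "127.0.0.1")
mysql_port = int(os.environ.get("MYSQL_SERVICE_PORT", "3306"))
print("connect to MySQL via", mysql_host, mysql_port)
```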
It also does health checks on the containers if you've configured them. It can check straight against a port number, or it can run a command inside the container that has to exit zero, and it uses that to ask: does this still look like it's working, or has it crashed, or is there some hardware failure affecting the container? If it's failing the health checks, Kubernetes will kill it and restart it somewhere else, and while it's working that out, the load balancer won't forward requests to your pod. So this takes care of a bunch of the challenges we had before. It works out where to run things for you, it starts them automatically, and as machines fail, it works out where they should run instead and runs them there. You can even go so far as to say, this container should be this big, it should have this many resources, and when Kubernetes is fitting it onto a machine it makes sure it isn't over-allocating the RAM or the CPU.

So, very briefly, Kubernetes talks about services, which say: I'd like to make this service available to other pods, it's going to run on this port under this name, and it finds the back ends that can provide the service using this sort of search query, basically. Then you have pods, which are the groups of containers that get run. And then you have replication controllers, which are a little thing that sits there; you say, I'd like to run a pod that looks like this, and I'd like three of them, and it sits there all the time going: how many are running now? Are they healthy? If there are four, it kills one; if there are two, it starts one. It's always sitting there making sure the thing converges. It's a very simple design and it works quite well. And this is a very complicated diagram again on the right, which I pulled from the Kubernetes design doc, but really it's just: here's a single host running a couple of pods, each with a group of containers inside, talking via the proxy to a different host with its own proxy and more pods. Then centrally, or at least not on every host, you have an API server, which serves the Kubernetes API and is the entry point.
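To make that pod, replication controller, and service vocabulary concrete, here is a rough sketch of the two objects you might write for something like a Nova API service, expressed as the Python dicts you would serialise to YAML or JSON and hand to kubectl or the API server. This is not from the talk or from Kolla: the field names follow the later v1 Kubernetes API, and the image name is a placeholder.

```python
# A replication controller: "run three copies of this pod, and keep it that way."
nova_api_rc = {
    "apiVersion": "v1",
    "kind": "ReplicationController",
    "metadata": {"name": "nova-api"},
    "spec": {
        "replicas": 3,                          # I'd like three of them
        "selector": {"app": "nova-api"},
        "template": {                           # the pod to stamp out
            "metadata": {"labels": {"app": "nova-api"}},
            "spec": {
                "containers": [{
                    "name": "nova-api",
                    "image": "example/nova-api",        # placeholder image
                    "ports": [{"containerPort": 8774}],
                    # health check: an unresponsive replica is killed and replaced
                    "livenessProbe": {"tcpSocket": {"port": 8774}},
                }],
            },
        },
    },
}

# A service: the stable name and port other pods use to reach healthy replicas.
nova_api_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "nova-api"},
    "spec": {
        "selector": {"app": "nova-api"},        # the "search query" for back ends
        "ports": [{"port": 8774}],
    },
}
```

The replication controller keeps the replicas converged as machines come and go; the service is what everything else looks up, via the per-node proxy, to reach whichever replicas are currently healthy.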
So, Kolla. Kolla is part of the TripleO project, and it's a bit of an experimental proof of concept where they deploy OpenStack using Kubernetes and Docker containers, in their case on the Red Hat Atomic operating system underneath. It's on Stackforge, and their intention is to use it as a bare-metal deployment option. They also have all the dependency pieces: MariaDB running in one of the pods, RabbitMQ running in another, and it works. You can install it; it's a full-running, fairly minimal install of OpenStack. The few hard cases like Cinder they haven't dealt with yet, because it's actually hard, so Cinder isn't one of the services in there. But they do have Swift, and Glance and things, and Neutron; they have a Neutron network node in there. Now, recognizing that Kolla is only at milestone one, it's still very much a proof of concept. The limitations they currently have are that they run only one pod per OpenStack service, and they don't really use the replication controller feature of Kubernetes.

They're not following the best practice, shall we say, for Docker containers and for Kubernetes yet. They have fairly fat containers: they've got a Nova container that runs the Nova API server and MySQL inside the same container, whereas ideally you would split those out. And they use a plain pod rather than a replication controller, so you've only got one Nova API pod across your entire cluster. You don't get the machine resilience that you should be able to get, and hope to get, from a more sophisticated setup. But it's not a bad first start, and it exists and works right now. So they only have one pod for each of the services, and we haven't quite got the scalability or the resilience to machine failure, but it is a good proof of concept, it works, and it's quite encouraging that they were able to build it so quickly.

So in the future, and this is some of what I've been looking at, ideally you'd want to strip the containers down to the bare minimum you can get away with. You only want to combine things into the same container when they have to be in the same container, and you only want to combine containers into the same pod when you have to. If there's any possible way of spreading them around, you'd like to do so. So what I've been building so far has the Nova API server separate from the Nova Conductor, which is separate from every other piece that can possibly be separated, separate from MySQL. And I'm using replication controllers to start them, because API servers are the very simple case: you can run three of them, you can run five of them, and then you're resilient to machine failure and can scale up and down with load very easily.

Perhaps not so important, but in my case I started off building on CoreOS, which has one interesting feature, about its only real feature. CoreOS is a very minimal distro designed just for running Docker containers. It boots very quickly because it just doesn't have any other features. The interesting piece is how it upgrades itself. It basically has two disk images for the system: you've got the one you're using right now, and then there's an upgrade service that runs all the time. It goes and finds whether there's a new version of CoreOS available, downloads it, and then, the way I've configured it at least, takes a lock in etcd: a new version's available, I've downloaded it, I'm going to try to take the lock so that only one machine reboots at a time. When it gets the lock it goes, okay, I'm rebooting, and reboots onto the new version. When it's finished rebooting it comes back up and goes, yep, I'm done, that worked, release the lock. And then if there's another machine waiting, it reboots. This works really, really well combined with the Kubernetes health-checking features. I have five VMs sitting there with my little test deployment running this. A new version of CoreOS comes out once a week, or more often if there's a security fix, and I don't notice it happening. My machines just download and reboot; while they're rebooting, the health checking fails for that particular Nova API replica, but my queries are transparently routed to the other Nova API replicas. It comes back up again, another one reboots. I'm totally oblivious that anything has happened, and yet I've got a new version rolled out everywhere.
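The "only one machine reboots at a time" lock is handled for you by CoreOS's update machinery, but the idea is simple enough to sketch with etcd's v2 compare-and-swap primitives. This is only a sketch of the idea, not the real implementation; the key name, TTL, and endpoint are illustrative.

```python
import requests

LOCK = "http://127.0.0.1:2379/v2/keys/demo/reboot-lock"

def try_take_lock(me: str) -> bool:
    # prevExist=false: create the key only if it doesn't already exist, so
    # exactly one machine wins. The TTL frees the lock if we die mid-reboot.
    r = requests.put(LOCK, params={"prevExist": "false"},
                     data={"value": me, "ttl": 3600})
    return r.status_code == 201

def release_lock(me: str) -> None:
    # Compare-and-delete: only remove the lock if we are still the holder.
    requests.delete(LOCK, params={"prevValue": me})

if try_take_lock("node-3"):
    pass  # reboot onto the new image; after coming back up, call release_lock("node-3")
```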
The hard cases, of course: Kubernetes itself is really, at the moment, geared around web-serving-type jobs. It doesn't have to be web serving, but serving in general, stateless jobs. The hard cases are still Cinder, for all the kernel reasons, and generally anything that uses storage. It's hard to run Swift under this, because Kubernetes is quite relaxed about restarting things on a different machine, and that's a fairly big deal if you've got a lot of state stored there. Swift would actually deal with it okay, because as long as you didn't move things around too quickly it should be able to replicate and keep up. But for something like a MySQL server this could be a disaster: if you just casually restart it on a different machine, you've got no database files. So storage is the harder case here, and a challenge in general.

That's the sort of stuff I've been working on, and my goal is quite different too. DevStack is an amazing tool, it really does a lot of work, but I would love for it to go away. I would love for the normal test setup to be much closer to what the normal production setup is. We should be able to build something that can scale down: it's a normal production deployment, but we can scale it down to one instance of everything while still using the same mechanics, still using the same network drivers, still pretending we have to talk across the network to our other components. And that should not be an exceptional, TripleO sort of feature; it should be normal. So I'm trying to build a Kubernetes-based, Docker-based install that can scale down. In that case, if you're trying to test Glance, you don't also want to test Swift. So, assuming I'm already running in a perfectly functional cloud, I'm just going to lean on the already-existing Rackspace public cloud Swift install and run Glance in my actual test environment; I'm making some of those problems go away. I'm going to use the database-as-a-service to run MySQL for me, because that's not the interesting bit I want to verify. I've gotten a long way with that. It doesn't work yet, but I've made a lot of progress, and hopefully something will come from it.

So that's it. There are some links: Kolla is on GitHub, Kubernetes is on GitHub, and the documentation is on those sites as well. The README.mds on those are pretty good, and Kubernetes has quite a lot of design docs available describing how it works. And of course Docker. I didn't quickly find the URL for the Rackspace private cloud LXC configs, but you'll find them with a search engine. Sorry? I tweeted it. It's been tweeted, there you go, it's on the Twittersphere. Any questions? I'm kind of running out of time, but please come with questions any time later on. Just one, then; you get the one question.

So the idea of running OpenStack on hardware that is pretty much the same everywhere, with the containers moving around, is very tempting. But at the same time, the efficiencies of having hosts specialized for compute and hosts specialized for storage kind of make sense because... Yes, particularly networking. You might only have some hosts that have full external connectivity, for example. Correct, they might not all be equal. So the question is, what is your vision? Do you see OpenStack running on interchangeable hosts, with the containers floating around everywhere? Or do you still see specialization of nodes, with containers like Swift being directed to storage nodes?
So Kubernetes is able to describe this. You can give fairly simple key-value attributes to your nodes and to your jobs, and then when you're running a job, you can say: only run it on underlying hosts that have these particular attributes (sketched below). So it's quite possible to say these hosts have high bandwidth, these have a lot of storage, and you should only run the storage jobs on the high-storage machines. That's quite possible, and I think that's the way it'll end up going. You want something about that flexible, and I don't think you need anything more; I think that's sufficient. And again, it's very flexible: you can easily change it and scale it as your particular needs change. The storage case is still interesting. If you wanted to run, say, Ceph or something under this, where you need to think a lot more about when and how you blow away a particular machine and move it somewhere else, that's something Kubernetes does not deal with very well. You can make it work, but as a Kubernetes administrator you need to be a lot more careful about which commands you type, because you might accidentally shuffle the whole lot and lose all your storage. I think that's solvable, but it's something you'd want to be very careful of at the moment. If you were going to use this right now, it would be quite reasonable to use it only for the easy jobs, and run the hard jobs the old way: on dedicated machines, outside Kubernetes, perhaps even still using Docker, but not scheduled through Kubernetes. In fact, when I run it, I run RabbitMQ through fleet, externally, just so I can assume it exists, rather than running it through Kubernetes. Come and ask me questions anytime.
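Here is a rough sketch of what steering jobs with those key-value attributes looks like: label the specialised node once, then constrain the pod with a nodeSelector. This is not from the talk; the node name, label, and image are made up, shown as the Python dict you would serialise for kubectl.

```python
# First, tag the specialised host, for example:
#   kubectl label node storage-07 role=big-storage

swift_object_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "swift-object"},
    "spec": {
        # Only schedule this pod on nodes carrying the matching label.
        "nodeSelector": {"role": "big-storage"},
        "containers": [{
            "name": "swift-object",
            "image": "example/swift-object",   # placeholder image
        }],
    },
}
```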