And if it wasn't for our sponsors, we couldn't have such a wonderful place to have a great talk. So I want you to know about the company that's sponsoring. It's called QConnect. They host a lot of meetups over there, like Linux LA and UUAC and other kinds of meetups. So they're actually one of the few genuinely user-group-friendly, I would say, talent agencies out there. Quite often you have a lot of recruiters that just want to, you know, sell us off, and it really kind of sucks. QConnect is a company that specializes in networking and building relationships within Southern California to connect top talent to our clients in the technology industry. They posted one of their flyers on the job board I have just outside, so do check that out. If any of you are looking for jobs, check out the job board. If any of you are looking to hire people, also check out the job board. I put a little link there to a spreadsheet you can add more information to. It's real simple. Now, without further ado, here's Adrian Otto, who I'm sure you're going to really enjoy. Thank you very much.

Thank you, Marty. All right. Docker, Docker, Docker. You can all go. Actually, that's not really what we're here to talk about. I work at Rackspace as a distinguished architect. I'm the PTL of a project in OpenStack called Magnum, which is a container service for OpenStack, and I also lead the Docker Los Angeles meetup group. I live here in Redondo Beach, California.

Today, first I'm going to set a little bit of context for you. I'm going to talk about some science. I'm going to mention a service from Rackspace called Carina and explain how that fits into the context of OpenStack Magnum. And then I'm going to talk about the three COEs, container orchestration engines, that Magnum supports, and a little bit about each one so that you can get a sense of which one you should gravitate towards based on the type of organization you have and the kind of workloads you have.

So let's get into the science. First: software is a liquid. A liquid is matter that takes the shape of its container. It has intermolecular attraction, meaning it sticks to itself. Liquids have properties called adhesion and cohesion; that's the sticking to other things and the sticking to itself. And the particles in a liquid are not fixed in position like they are in a solid, which is why, when you pour it, it flattens out. So software is a liquid, and it takes the shape of anything that you pour it into or onto. And hardware is a solid.

So if what you want to do is run bigger, faster, more awesome software, then maybe what you need is bigger, more awesome hardware. One of the things that Rackspace does is participate in the Open Compute Project, which I believe is a sponsor and an exhibitor here at SCALE. We also participate in OpenPOWER. And we designed a computer system; this board is called Barreleye. Those of you who are hardware geeks will recognize that these specs are pretty big for a machine. So big, in fact, that if you were to run top on this machine, this is what it would look like; that's where your process list would start. Okay, that's 160 CPUs on one board. So that's called Barreleye.
And this is a testament to working in the open, not only for software, which Rackspace is famous for and which Linux is the proof point for; the same open development works for hardware as well. So bigger hardware means bigger software. But even with a machine like Barreleye, there is a limit to how big that machine can be. At some point, you're going to need something that will tie together multiple machines, so that the liquid is now within some logical thing instead of a physical thing. It's a liquid inside of a solid, okay? That's application containers.

Now, I'll make the point that containers are the most disruptive force in infrastructure technology since the virtual machine was invented. In the twelve years since virtualization became commercially available, virtualization technology has totally changed the way we think about computing, the way we think about applications, the way we think about pretty much everything that I care about. And containers are about to change almost all of that in some way.

So imagine a world where compute is instantly available, not in minutes but in seconds, and you can pay for it, say, by the second instead of by the hour. Imagine all the new things you could do if compute were that accessible to you. Well, it turns out that Rackspace has a product called Carina. It's new, it's in beta, and you probably haven't heard about it yet. It is a containers-as-a-service that allows for instantly available compute using native tools and APIs with no infrastructure worry. Now, why is this in a talk about Docker Swarm, Kubernetes, and Mesos? I'll get to that in a minute. This is how it works. You decide you want a container cluster, you give it a name, and 45 seconds later it turns into an active cluster. Once you have an active cluster, you can start running containers on it. But before you do, you set up your shell environment using a shell script that is downloaded from the Carina service. And when you run containers, instead of running locally on the machine where the Docker client is running, they actually run in the cloud. So what you see here is the native Docker client being used to start a container. (There's a rough command-line sketch of that workflow a little further down.) That's available for free today in beta. If you are interested, it's at getcarina.com; that'll be on the last slide as well.

So software is a liquid, and now you need to choose what kind of container to put it in. Now, I said I would talk about liquids, Carina, OpenStack, and Magnum, and we haven't gotten to that last part yet, so let's dive into that. OpenStack started in 2010 as a collaboration between NASA and Rackspace, and it became the OpenStack Foundation in 2012. I think this project is remarkable because it is the only large-scale infrastructure software that has gone from zero to widely deployed and trusted for production applications in under five years. Imagine back in 1996, when Linux was five years old. How many of you were running Linux in production in 1996 for mission-critical applications? That is five percent or less of this audience. You were all crazy in 1996. How many of you know someone who runs mission-critical applications on OpenStack today? At least half the room. Pretty awesome.

So Magnum is the part of OpenStack that I work on. There are a lot of different projects in the OpenStack ecosystem. Nova is the compute one. Glance is the image storage. Keystone is the identity system.
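Roughly, the Carina workflow described above looks like this from a terminal. This is a minimal sketch, assuming the beta-era carina CLI; the subcommand names are from memory and may have changed, and mycluster is just a placeholder. The point to notice is the last step, where the completely standard Docker client does the work.

```bash
# Minimal sketch of the Carina workflow (beta-era commands, from memory).
# 1. Ask the service for a named cluster; about 45 seconds later it goes active.
carina create mycluster

# 2. Source the downloaded shell environment, which points DOCKER_HOST,
#    DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH at your cluster.
eval "$(carina env mycluster)"

# 3. Use the native, unmodified Docker client; the container starts in the cloud,
#    not on your laptop.
docker run -d -p 80:80 nginx
docker ps
```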
I won't get into every single one of those OpenStack projects, but there is a team working on each of them, with its own autonomous management. Magnum is about combining the best of infrastructure software with the best of container software into an experience that makes sense. Because not all problems are infrastructure problems, not all problems are container problems, and they are not all app problems. Sometimes you just have infrastructure issues, and container orchestration software doesn't actually fix your infrastructure issues. It doesn't make your infrastructure programmable in the way that infrastructure software does. So there has to be some way that these two worlds fit together, and Magnum is working to straddle that.

In 2014 the OpenStack containers team founded the Magnum project, and it became part of OpenStack. In Magnum you can create an entity, a cloud resource, called a bay. A bay is the place where your container orchestration engine runs. You can choose to run Docker Swarm or Kubernetes or Mesos in that bay. A bay is just a collection of compute instances that run the same software. (There's a rough sketch of creating a bay a little further down.)

Now, why would you want a choice here? Why wouldn't you just pick one? The reason is you might want to use a specific tool that uses a specific API. You want access to a native API. You don't want to reinvent your tooling just to talk to some hosted service with a weird interface that locks you in to that platform; you just want to use whatever the prevailing open source API is. So this gives you the choice to run natively against the Docker API, natively against the Kubernetes API, and now, with Mesos, we can even run Marathon.

You also have the choice to run either on virtual machines or on bare metal. If you've got an existing OpenStack cloud that already has a virtualization implementation and you just want to lay containers on top of that, you can do that. Or if you want to start fresh and have a bare metal experience, where you're getting higher performance and better consolidation rates, you can do that on bare metal as well, because Ironic is the service within OpenStack that lets you deploy entire hardware machines on demand.

There are abstractions for three key resources in every Magnum bay. We have the concept of a bay, the concept of a node, which is the thing that makes up the bay, and the concept of a container, which you might or might not use; I'll get to that in a minute. If you decide to use the Kubernetes bay type, you get some additional abstractions as well. Kubernetes has the concept of a pod, a service, and a node, as well as some other things like replication controllers. Those are all supported in the Magnum API.

So why does Magnum even matter? What is this gluing together of infrastructure software and container software? Well, first of all there's the choice of COE, and there are different reasons why you might want to run a different COE for one application versus another; I'll get into that in more depth. But you might also want the option to run these things concurrently: a Swarm cluster, a Kubernetes cluster, and a Mesos cluster all running at the same time on the same infrastructure, and in a multi-tenant way. All of the container orchestration software that existed when Magnum was conceived was single-tenant by nature. There was no way to safely share the resources of a cloud managed by a single container system.
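Here is the bay-creation sketch mentioned above. It is hedged: these are roughly the python-magnumclient commands of this era, recalled from memory, and the image, keypair, network, and flavor names are placeholders for whatever actually exists in your cloud.

```bash
# Hedged sketch of creating a Kubernetes bay with the Magnum CLI of this era.
# A baymodel is the template that picks the COE, image, flavor, and networking.
magnum baymodel-create --name k8s-template \
  --coe kubernetes \
  --image-id fedora-atomic \
  --keypair-id mykey \
  --external-network-id public \
  --flavor-id m1.small

# Create a bay from the template; Magnum builds the nodes, wires up TLS, and
# exposes a natively accessible Kubernetes API endpoint.
magnum bay-create --name k8s-bay --baymodel k8s-template --node-count 3
magnum bay-show k8s-bay    # status reaches CREATE_COMPLETE when it's ready
```

From there, the ordinary kubectl or Docker clients talk to the bay directly, which is the point about native APIs above.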
None of that software had a multi-tenancy concept. Well, the bay is that multi-tenant concept. It is the way that you isolate your cluster from other subscribers; that's what gives you this safety. So if you trust today that two virtual machines sharing a host are secure from each other (okay, we can argue how true that is), if you believe that they're secure and isolated from each other, then you can trust Magnum bays, because that's what they're based on. They're based on the same isolation that is available in your virtualization system.

Okay, next: secure bays. How many of you have set up Kubernetes securely, where every single component in the Kubernetes system is using a client certificate? Was it easy? Hell no. It was not easy. Magnum makes this easy. With Magnum it's one API call, and you get your bay, and it is totally set up with all of the signed certificates, in all the secure ways you expect, straight out of the box. That covers all of the communication, not only between the client and the API service, but between the master and the minions, and it makes sure the certificates get onto the systems in a secure way. You're not just blindly copying private keys over the network; that's bad. So you get all of the certificate generation, all of the signing, and the ability to make your containers implementation, running on top of a private cloud or a public cloud, compatible with the standard container tools. I can take a Docker client and point it at a containers implementation running on a Magnum bay, and it can just work, and it can be secure.

Load balancer integration. When you have a Kubernetes system and you say, with a replication controller, scale out the number of containers that are running this service, it will do that. But it doesn't expand the actual infrastructure that it's running on in order to grow beyond its limits. Once you get to the limit where you can't create any more, you're done. But when you have a bay, bays are scalable. You can continue to add more and more nodes; as long as your cloud can keep giving you more servers, you can keep growing and growing, or shrinking, as the case may be. So we will actually configure the cloud load balancer that is available through the Neutron API to scale out as your Kubernetes cluster grows and shrinks, and we can actually swell and contract the infrastructure that the cluster runs on.

Okay, and then finally, I talked about your choice between bare metal and virtual machines. If you just decide to set up one of these container orchestration engines straight up, you're like, I've got this pile of machines, I'm going to put my Kubernetes on top, and I'm going to be off to the races. You're not getting the choice of how to do this; you're picking one or the other.

A little bit more about Magnum before I switch to talking about COEs. This is over a quarter million lines of code. It's one year old, happy birthday Magnum, just three days ago. 122 engineers from 34 different companies have contributed. I think this is a pretty awesome thing: when you bet on an open source project that has a thriving community ecosystem, you're not taking a risk like you would if you were making a bet on a single startup and the software that it sponsors.

So let's talk about the COEs. Swarm, Kubernetes, or Mesos, and why? How many of you have read a Choose Your Own Adventure book?
A lot of kids growing up in the 1980s, and I guess even the 1990s. Hopefully this feels like a Choose Your Own Adventure book. But before I get to that, let's talk about the types of orchestration systems. There are imperative systems and declarative systems, and they look and behave very differently.

In an imperative system, you provide explicit instructions for what should happen within that system. The system is stupid and you are smart. In your template or your script or whatever the input to that system is, you are providing exactly what the system should do. You're the master. And this is great, because you have ultimate flexibility to change anything about that process you want. Any tiny little detail that needs to be changed about the process of getting from start to finish is adjustable. But it comes at a cost: it's a complicated input.

Declarative systems are a little different. In a declarative system, you describe what the output should be. You say, I want this to happen. I don't care how you get there, I don't care what order things run in, I don't care what process you follow, I just want that to happen at the end. So the system itself is really smart and really complicated. But it's also more rigid, in that if I want some tiny aspect of what happens in production to change, I might actually need to go in and change scheduler logic or code within the model interpreter to change what actually happens. And that can be kind of a nuisance.

So first, let's hit Swarm. Docker Swarm is great if what you want is an imperative system. If you're a super smart badass, you probably like this. This is what Carina implements; when it gives you a container cluster, it is giving you one of these. How this works is that there's a thing called a swarm manager and a thing called a swarm node. When a node starts up, a container running inside its Docker daemon connects to the discovery service and basically registers itself as belonging to the cluster. That registry is shared with the swarm manager, so the swarm manager has this concept of state. The back end for this can be a token service that is hosted by Docker, Inc., using etcd. And you can do it with a bunch of other stuff too: I think it works with Consul, I think it works with ZooKeeper; there are all kinds of back ends for this.

Once the machines are registered, you get this illusion of one giant Docker machine, and you have a single API endpoint. Your Docker client talks to this one Docker machine, this load-balanced API thing, and it schedules the work onto the different swarm nodes in accordance with whatever constraints you offered. You can hint that a container should be matched with a system that has a particular tag attached to it. You can say how much memory each of your containers is going to use, so it can fill nodes up. And there are different scheduling strategies: you can have it fill up each node from bottom to top, then the next one, then the next one; that's called binpack, I think. And there's another one called spread, which puts one container on one node, the next on another node. There are different reasons why you would want to run those different scheduling strategies.
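To make that concrete, here is a hedged sketch of how standalone Swarm (the v1 swarm that predates Docker's built-in swarm mode) gets wired together and where the strategy and constraints are expressed. The IP addresses are placeholders, and the token-based hosted discovery is just one of the back ends mentioned above.

```bash
# Hedged sketch of standalone Docker Swarm (v1) wiring and scheduling knobs.
# Create a cluster ID with the hosted token discovery service
# (etcd, Consul, or ZooKeeper could be used instead).
TOKEN=$(docker run --rm swarm create)

# On each node: register with the discovery service as a member of the cluster.
docker run -d swarm join --advertise=192.0.2.11:2375 "token://$TOKEN"

# On the manager: expose one big virtual Docker endpoint, picking a strategy
# (spread, binpack, or random).
docker run -d -p 3375:2375 swarm manage -H tcp://0.0.0.0:2375 \
  --strategy binpack "token://$TOKEN"

# Point the ordinary Docker client at the manager; memory hints and constraints
# steer where each container lands.
docker -H tcp://192.0.2.10:3375 run -d -m 512m \
  -e constraint:storage==ssd nginx
```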
For instance, if you are running in a cloud and you're trying to cost-optimize, you want the smallest possible number of nodes in the cloud. So you really want binpack, so that you've got idle nodes you can kill off, and the nodes you actually keep running are getting a lot of work. Whereas if you've got a private cloud and a fixed infrastructure, you just want to make use of whatever you own; you want to spray all of your containers evenly across what you own. So that's configurable in Swarm.

So here are the reasons why you would prefer Swarm over other COEs. First, you love using the Docker tools. If you're a Docker fan, Swarm is for you. Next, you prefer an imperative system, even if you're augmenting it with declarative tools. If you attended Jerome's talk yesterday, he showed Docker Compose, which gives you a declarative interface to Docker, and therefore to Docker Swarm. So you can have a declarative interface to an imperative system and get a balance of both benefits: a simple interface to something that you can still customize. If that sounds good to you, Swarm is for you. Next, you want to combine containers that are wrapped around legacy applications with containers that are running cloud-native applications, meaning things you wrote intentionally for running on the cloud; they were conceived in the cloud, and you want both to coexist in the same clusters together. Swarm allows this more easily than other systems do. Another reason you might choose Swarm is if you've got a whole lot of machines. If you've got a cluster in the large hundreds to small thousands of machines, Swarm works to run that.

Kubernetes. Kubernetes is just over a year old. It has a very high velocity of development; similar to Magnum, there's a whole lot of activity there, and it's evolving very quickly. It was originally started by Google. It is really, really good for running web applications or the back end for mobile applications. It has a concept of pods. A pod is a grouping of containers that run on the same host, so that you can run patterns like the sidecar. The way the sidecar pattern works is you've got a container, and it's got this other container that assists it in doing something, like a logger or a monitoring process or something related to that container that has to be attached to it. Or it needs really low latency between your containerized process and some supporting process; maybe you've got a message queue and the latency between the message queue and the app layer needs to be extremely small. Defining those in terms of a pod is good. Swarm doesn't do this. Kubernetes does. Mesos also does, depending on which framework you're using.

Kubernetes also has this concept of tags, or labels, which, if you came to Brandon's talk, you saw a little bit about. Every resource in a Kubernetes system has this metadata attached to it, arbitrary text data that you can use to relate all of the different elements of an application together. So if you're doing microservices, you might have an app that's actually built from dozens of different components, maybe hundreds, and having them all related, keeping track of all of that, is really important. This is something Swarm doesn't have, but Kubernetes does. There's also the concept of a replication controller. This is what the architecture looks like.
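Since the replication controller ties together the pod, label, and sidecar ideas just described, here is a hedged sketch of one as a v1 manifest pushed in with kubectl. The names, the nginx image, and the fluentd-style log-shipper sidecar are illustrative placeholders, not anything from the talk.

```bash
# Hedged sketch: a v1 replication controller that keeps five copies of a
# two-container pod (an app plus a logging sidecar) running.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 5              # pods are recreated until the count is five again
  selector:
    app: shop              # matched against the pod labels below
  template:
    metadata:
      labels:
        app: shop          # arbitrary metadata that relates resources to each other
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
      - name: log-shipper  # sidecar: co-scheduled on the same host as "web"
        image: fluentd
EOF
```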
So this concept of a replication controller, which is not pictured on that architecture slide, says: there must always be a certain number of this resource running at a given time. If my replication controller says the container count needs to be five for this service, it will make sure that if hosts die off and you get down to a count of three or four, it goes and builds a couple more. If the count is supposed to always be one and that one dies, it'll automatically start a new one up. The replication controller is responsible for that. Again, Swarm doesn't have this concept.

So why would you pick Kubernetes? Well, number one, you're a Google fan and you think their stuff doesn't smell. In all fairness, everything that's been running at Google for the last decade, every single service at Google that you interact with every day, everything relating to search, everything relating to their main applications, all of it runs in containers, and it has for longer than we've called them containers. In fact, a lot of the original code in the Linux kernel that implements namespaces came from Google. The most simple of those was chroot, which has been there probably since I was in diapers; okay, that's the simplest of all the namespaces in the kernel, but a lot of the more interesting ones came from there. These guys have done it at scale. It runs on what I would guess to be millions of cores; there may be Googlers out there who can correct me, but they actually do know what they're doing. So there's some weight to this number one.

Number two, you prefer an imperative system. If what Kubernetes does is sophisticated enough for what you want, you should probably use it. If what it does is not sophisticated enough for what you want, then this is going to be an issue. Third, you really care primarily about cloud-native applications. You're using this for greenfield stuff; you're not trying to run legacy applications on it. Legacy applications don't fit well into a Kubernetes cluster. Or, fourth, you've got a pretty big cluster, but it's not ginormous. Now, I picked the number 200 here, and that number comes from conversations I've had with the designers of Kubernetes. It's supposed to work well at this scale. It could work well at a larger scale, but most applications designed in the pattern that's expected to run in Kubernetes today have scaling constraints that make it impractical to go beyond these limits. So if what you really want to do is run a giant cluster, you will need to make very deliberate choices about how those applications behave. You can't just expect to drop in a kube file and have it scale to thousands of nodes.

I'll pause for a question. Yes. The question is: Google runs at large scale, so why doesn't Kubernetes scale that way? Kubernetes is an open source implementation of everything that Google learned about running containers internally. What they run internally is massively scalable. What is in Kubernetes is designed to be generally useful, and that's the difference. It's an intractable problem to make all applications massively scalable. I'll leave it at that.

So let's talk about Mesos a bit. Mesos in many ways is older and cooler and more stable than Kubernetes; now, that may change over time. It launched in 2009, and it is awesome at doing task-based or job-based workloads.
So if you want to schedule an asynchronous task to go run, and you want it to act at web scale on some giant amount of data or compute, Mesos is really, really good at that. It's also got all the high-availability stuff built in. (There is high availability built into Kubernetes too; in fact, Magnum supports multi-master Kubernetes, which you might have seen on a prior slide, with just a single config option you can pick.) Mesos uses a ZooKeeper cluster to keep shared state, with a leader and multiple standby nodes, so you're never going to lose your cluster state if it's deployed properly.

It also has the ability to run different frameworks concurrently. There are a bunch of frameworks: there's Marathon, there's Chronos, and there are another two or three that, of course, I can't think of because I'm on stage. But there are four or five different frameworks that you can plug in and run concurrently on the same cluster at the same time. In fact, there's even some work to run Kubernetes on top of Mesos. Nobody actually does that in production yet that I'm aware of, but if you are doing that, please let me know; it would seem really cool to me. So you could decide to schedule work, say, on Marathon and on Chronos simultaneously, sharing the same infrastructure, with the scheduling of all of that work coordinated. (There's a quick Marathon sketch coming up in a moment.)

The way this works is that the Mesos slaves communicate with the Mesos master and provide an offering. They say: I have this much compute, I have these capabilities, I have this much capacity. That's provided as essentially an offer to the master, so the master knows how many resources it has and schedules accordingly. That's how the system works. There are also offers from the master: it offers back down to the frameworks running on the Mesos slaves to say, here's some work to do. So the offers work in both directions.

So why would you prefer Apache Mesos? First of all, you're a big data house. If you've got a giant Hadoop cluster, you're running all kinds of stuff, you've got a petabyte of data, if you're that kind of shop, Mesos is for you. Next, you're the kind of company that has an IT department big enough to have an infrastructure team, meaning people who are employed for the purpose of running your infrastructure; Mesos is for you. Third, you want lots of different giant workloads running at the same time. A lot of us have one giant thing we need to do, or two giant things. What if you have 50 giant things you need to do? Mesos is better at that than the other systems would be. Or you've got a cluster that truly is ginormous: if you're in the 10,000, 20,000, 30,000 node range, it is probably the only one that's going to work right at that scale.

So here's the money slide. By the way, the slides are going to be downloadable; you can just go to the SCALE site to get them. This is the Choose Your Own Adventure map. If you were going to write the story of "pick your COE," this is what it would look like. You would enter, and it would ask you: are you a badass, yes or no? If you're a badass, then: are you a big data shop? Yes? Okay, then Mesos is for you. No? Okay, do you have a cluster that's bigger than a thousand nodes? Yes? Okay, then do you have your own IT team, a giant IT team with an infrastructure group? Yes? Okay, Mesos is for you. See how this works?
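Before finishing the map, here is the Marathon sketch promised a moment ago: what handing a long-running, Docker-packaged service to a Mesos cluster through Marathon looks like. It is a hedged sketch; the endpoint, app id, and image are placeholders, and the JSON shape follows the Marathon REST API of roughly this era.

```bash
# Hedged sketch: asking Marathon (a framework on a Mesos cluster) to keep two
# copies of a Docker-packaged service running. Host, id, and image are placeholders.
curl -X POST http://marathon.example.com:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{
        "id": "/web",
        "instances": 2,
        "cpus": 0.5,
        "mem": 256,
        "container": {
          "type": "DOCKER",
          "docker": { "image": "nginx", "network": "BRIDGE" }
        }
      }'
# The slaves keep advertising their resources to the master; Marathon accepts
# offers that fit the 0.5 CPU / 256 MB ask and launches the tasks there.
```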
Or, back on the map, if you're not a badass and you do need to run legacy applications, then maybe Mesos is for you. That's how this map works. So now you've all got my opinion on what kinds of organizations and what kinds of workloads are best suited to each kind of container orchestration engine.

And that brings me to a review. I made the point that software is a liquid. The container for software is hardware, unless you're using application containers, in which case you've got liquids inside of solids. I talked about Docker Swarm, the native Docker clustering solution. I talked about Kubernetes, Google's opinionated view of container orchestration. And I talked about Apache Mesos, which is the multi-framework solution. I'll be ready for your questions now.

Okay, question here. Declarative or imperative? Yes. Swarm is imperative by nature. Let me go back to this slide; sorry, I went past it. Okay, here it is. Swarm is an imperative system by nature; you can customize what happens in the deployment process. Kubernetes is declarative by nature: you supply a YAML file called a kube file, you use the tool called kubectl to push that file into the cluster, and it interprets what you asked for, does its own thing, and makes that happen for you. Did I say the wrong thing on my slide? Sorry about that. The "why pick Kubernetes" slide should say "when you prefer a declarative system"; it has the wrong word. Yes, it should say declarative there. Sorry about that. Mesos really depends on the framework. The task-oriented frameworks tend to be more imperative by nature, but it really depends on what you're running. If you're running Kubernetes as a framework on top of Mesos, then it's declarative. So it depends, but in general, as a category, Mesos would be more in the declarative world than the imperative world. And yes, you can run Docker Swarm as a framework on Mesos as well. In fact, you can run just about anything you want as a framework on Mesos, as long as it can speak the protocol of offers. Good point.

Okay, the question was: can I give examples of why you might want to run a container inside a VM versus running it on bare metal? Okay. Virtual machines interact with hardware through a hardware virtualization interface; there's essentially an emulation of the hardware. The number of operations in the hardware virtualization interface is actually relatively small. Containers, when they interface with the kernel, use the Linux syscall interface, which by comparison is a giant interface; it's something like 480-something calls as of Linux 3.11, I think. If your concern is the security attack surface of the isolation between these things, it is much more difficult to secure the Linux syscall interface than it is to secure the hardware virtualization interface, which is why, in general, virtualization is a better tool for security isolation than containers using namespaces and cgroups and other features of the Linux kernel. So if I were talking from a purist perspective, I would say virtualization is the right tool for security isolation. But it comes at a cost: a performance penalty. Some applications run at a third of the speed when they're virtualized versus when they're not. So depending on what the application is doing, it might actually be really expensive to run it inside virtualization, and you might be willing to sacrifice some attack surface as a way to get more performance. And in some cases, you don't care about the security isolation.
You might be an internal enterprise running 50 different microservices that are all part of the same application, and you're not concerned about the isolation between them. If that's the case, running on bare metal makes a lot more sense. So if you're primarily motivated by security isolation, then running your containers on top of virtual machines can provide better isolation. It can also provide access to a lot of infrastructure features that you wouldn't necessarily get from your container system: your infrastructure may be able to offer you software-defined networks, software-defined storage, load balancing capability, all kinds of stuff that may or may not be available in your container orchestration environment. So that may be another reason why you might want to combine the two together. Did I answer your question? Okay.

Next, right here. So the comment is: to do security right, you really need not just an exoskeleton around your network, but to actually be secure all the way down. Completely agree. Yeah.

So, which one of these systems is best suited to data locality issues? Mesos, depending on the framework. If you're running Hadoop as the framework on top of Mesos, it has locality logic built into it, so it would be very good at that. If that's something that's critical to your app, then Mesos is probably the best of these options for you.

Okay. So the question was: can I elaborate on the growing and shrinking, not only of the infrastructure that the orchestration system runs on, but of the application itself? Okay. So how many of you think autoscaling works? Raise your hand. Autoscaling. Does autoscaling work for you? Who has production applications where autoscaling works for you? No hands. What, one, two? They're kind of halfway doing it. You're like, yeah, it sort of works for me. The truth is, autoscaling in the cloud does not work today, and the reason it doesn't work is that it takes too long for the system to respond to a change in stimulus. The feedback loop is too slow. Containers start a lot faster than virtual machines do, so it becomes possible to have a more adaptive system that is more responsive to the stimulus, so that you can have elasticity that more closely matches the workload. And hopefully your cloud providers can begin to offer this to you in a way where you're only paying for things in smaller increments of time. So you actually do want these systems to be much more elastic than they can be today. They can scale up faster, they can scale down faster; that saves you money, and it's a more effective use of resources for the cloud providers, since they get to sell their resources more widely. It's good for everyone. For this reason, we want autoscaling to be built into these container orchestration engines.

Almost all of the autoscaling algorithms that exist today for cloud are really stupid. And they're really slow. And they only grow by very small amounts: load is too high, add a machine; load is still too high, add a machine; load is still too high, add a machine. Load has been really, really low for a long time? Try subtracting a machine and see what happens. Oops, performance is bad, add one more machine. Okay, here we are. That's how the algorithms work. They don't provide any predictive analytics, any prediction of how many machines to add and how quickly to add them, which matters, because the machines take so long to add.
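To caricature that in a few lines: the reactive loop below is essentially the algorithm being described. It is purely illustrative; average_load, add_one_machine, and remove_one_machine are hypothetical helpers standing in for whatever metrics query and cloud API a given autoscaler actually uses.

```bash
# Illustrative caricature of the slow, reactive autoscaling loop described above.
# average_load, add_one_machine, and remove_one_machine are hypothetical helpers.
HIGH=80; LOW=20
while sleep 300; do                 # re-evaluate only every five minutes
  load=$(average_load)              # e.g. cluster-wide CPU percentage, as an integer
  if   [ "$load" -gt "$HIGH" ]; then add_one_machine      # then wait minutes for boot
  elif [ "$load" -lt "$LOW"  ]; then remove_one_machine   # one cautious step at a time
  fi
done                                # no prediction of how many, or how fast, to add
```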
And containers open up a whole new world of what's possible, because if you've got proper layering of your application and you're able to actually start these things really fast, then autoscaling becomes possible. But this brings up a problem, and this is a long way of me getting to it: you've got both the scaling of the app on infrastructure that is already deployed, which is relatively easy, and then the scaling of the infrastructure that the cluster is running on, which becomes much more difficult if you're not using a system that has a combined source of truth.

So what you actually need, say in the case of Kubernetes, is this: whatever infrastructure it has deployed right now, if I'm scaling out, it needs to use up what I've got, and then it needs to place an order with the infrastructure system to get bigger. And when I've got more than enough, it needs to place orders to become smaller. What I shouldn't have is two separate systems with two different controllers, one managing infrastructure scaling down here and another managing application scaling up here. If you're using any of the popular platforms-as-a-service today, that's exactly how they behave: they're dual-controller systems, they are not coordinated properly, and they do not work right. You get pathological behavior once you actually start to get a lot of activity in the system. And so how we deal with this is we disable autoscaling, we press a button to scale up, press a button to scale down, and human beings are responsible for deciding what to do.

So in a perfect world, what we will have is bays that automatically scale in coordination with the COE. The COE will be responsible for deciding when the bay is going to scale. And the criteria for when your bay scales are different for every application. It's not just "my load average is high, so scale." It might be: I'm using up my storage, I need more storage, put storage online. Or it might be: my queue length is non-zero right now, or my queue length is above a threshold right now, so scale. So the criteria for when you scale need to be programmable. Hopefully that answers the question and lays out what the problem is. No container system that I am aware of does this yet; this is all still to be developed. I am aware of developers in the Magnum project who are working on extensions that will do this. There's a project called Senlin, S-E-N-L-I-N, that is an autoscaling service for OpenStack that is programmable in this way and could act as a coordinated controller. There's also the concept of eviction: you can have one application that's scaled, and another application that needs to scale, but you've got a fixed amount of resources, and it can decide to shrink the first application to make room for the more important one. Senlin does that as well. So hopefully that answers the scaling question.

Yes. AWS Lambda will scale well. I agree, no argument from me. Yes. Is there CoreOS and Rocket support in Magnum? Is that the question? Yes. In fact, Magnum does have CoreOS as one of its image types. So when you build bays, you can build them with CoreOS as the operating system on the bay, and you get all the stuff that comes with CoreOS as a result. So yes, it's built into this whole picture already. When I'm talking about containers on bare metal, the question is: what am I actually talking about when I say container on bare metal?
Is that the question? I'm talking about a host that is running a COE agent; in the case of Docker, it's running a Docker daemon. So yes, it is dedicated to the purpose of running containers.

Oh, the money question. He asked: is Carina running inside VMs or running on bare metal? Not exactly either; I can't talk about all of it, and it's a little confusing, but the answer is neither. It actually uses OpenStack. OpenStack has a few different flavors of servers that you can start. One flavor would be bare metal, which I talked about with Ironic. Another would be a virtual machine. And there's a third one, an LXD-managed container. So you've got a compute host that's dedicated to running LXC containers, into which we provision Docker daemons, which we arrange in clusters, combined with all of the security features in the Linux kernel: mandatory access control, memory randomization, and five or six others. Every single security feature you can think of that's in the Linux kernel for security isolation is utilized in that first layer, where it's LXC containers, and Docker is managed in a layer below that. That's important, because in Carina, when you run a container, you get a bare metal performance experience, not a virtual machine performance experience. That's different from what you would get in another cloud's hosted container service. All of the ones that I'm aware of are running some form of virtualization, well, with some exceptions. The major clouds, the Microsofts and the AWSes and the Googles of the world, that's how they do it. There are some more niche clouds that do something like Carina does.

Sweet. How did you like that? So Adrian runs the Docker meetups, and in addition we also have OpenStack meetups over here in Los Angeles and Pasadena, so do check those out. And I think you've got a last slide that, wow, this cuts in and out. Wonderful. You've got the last slide on there, right? With your contact information? Sweet. Thanks.