Hello everybody, and welcome to the joint OpenStack Kubernetes environment. I think I gave it a more hybrid-y, cool-sounding title. My name is Rob Hirschfeld, and we're going to spend about 40 minutes talking about OpenStack's favorite topic, which is Kubernetes. This is supposed to be an operational, very pragmatic assessment of this joint OpenStack Kubernetes environment. This is something that six months ago, a year ago, I was very skeptical of, and so I've been giving this presentation as a series of progressive updates. This is our May update. The short version, if you want to catch another session, is: yes, we can do it. This is a Kubernetes under OpenStack talk, and Kubernetes becomes the dominant platform. We'll talk about all of that. That's my short summary.

My name is Rob Hirschfeld. I've been in the OpenStack community for quite some time. I was a board member for four years, and I'm currently the co-chair of the Kubernetes Cluster Ops SIG. So if you're an OpenStack veteran and you remember the days before we organized operators, we're trying to avoid that mistake in Kubernetes. I'm also the founder of some open source projects. One is called Digital Rebar. If you're an OpenStack veteran, you'll remember a project called Crowbar, which is still SUSE's installer; I was the founder of that project also. My company, RackN, specializes in hybrid automation. We work on underlay, so physical, hybrid cloud type deployments using Chef, Ansible, Puppet, Salt, whatever. We do a lot of that. In the past I was at Dell, and I've been doing data center operations work since 1999.

So the framework for this talk, and we will get to some very detailed components of how this works and how it goes, but it's important to me to frame it so that you understand that we're talking about operational needs. This is not a Kubernetes dev talk. It's really about how we help operations succeed running OpenStack. And operators are not developers. They have different needs. They want things to be very simple. They want them to be transparent. They don't want to be hacking the code base of the platform they are trying to get running. They want it to run. And one of the things that's important here is that we don't want our operators to be asked to become super users of the platforms that they're trying to deploy while they're learning how to deploy them. That means you don't try to make a Kubernetes operator learn Kubernetes first, and OpenStack operators shouldn't have to learn OpenStack initially to run it, right? My experience is that running the platform is very different than using the platform. They're actually designed to have different abstractions between what you use, which you're trying to hide, and what you actually have to deal with to make that platform run. So when you're dealing with an infrastructure platform like OpenStack or Kubernetes on metal, you have to deal with RAID and BIOS and drives and networks and a whole bunch of messy stuff. Once that platform is installed, your users should never see that. The whole purpose of those platforms is to obfuscate the mess of infrastructure. And then the number one thing that we learned early in the OpenStack days, and I think it's important in any platform, is that upgradability is the number one operational concern. Operators, unlike developers, don't keep installing the platform on a daily or hourly basis as they test it and change it, right?
We want that platform to be upgradable so that as we find bugs or patches or something comes out, we can fix it. Saying "just tear down your Kubernetes control plane or your OpenStack control plane so I can put in a patch" is not an acceptable answer. In general, Kubernetes has very good semantics and operational foundations for this type of infrastructure pattern. It's actually very well designed to be upgradable, to be replaceable, to be easier to understand; it's a vast infrastructure designed for people to do HA, upgradable deployments that have a degree of self-maintenance. So that's one important thing about this.

And then there's a second piece that, to me, is evolving out of the Kubernetes under OpenStack story, which is shared operational best practice. One of the things that's been missing in the OpenStack community, somewhat on purpose and somewhat by accident, is a shared operational vision. If you go to the ops meetups, you'll find that every scale operator has their own Puppet, Chef, or Ansible deployment scripts. The vendors have their own operational deployment scripts. So typically, when somebody finds an issue with operating OpenStack, nobody else benefits. And that, to me, is a serious problem in an open community. We want our operational improvements to flow throughout the community. If Red Hat learns how to run OpenStack or Kubernetes better, we want everybody to benefit, right? If a scale operator like Google makes operations better, we want that to come back into the community and benefit everyone. Not just by fixing the bug: operations and how you run the system are as much a part of the value as the actual software and the bug fixes in the platform.

Okay, and just to make sure you understand the roadmap with this, because Eric Wright was giving a talk earlier today or yesterday about the Kubernetes sandwich: in this case, we're talking about the bottom. I'm not worried about running Kubernetes on OpenStack. We're really talking about how we make OpenStack run on Kubernetes. How many of you think that's even a good idea? Good, about 50-50, excellent. So I was very skeptical about this, and I'll explain why. I'll decompose it a bit, but this is about the underlay.

And then one of the things that's really important to me in this discussion, in the whole consideration, is that it must work with Kubernetes primitives. My ground rule for this whole talk is that we're not talking about using Kubernetes and then hardwiring it so that you need an external scheduler to place containers or generate and place services. We're talking about using Kubernetes primitives so that you actually have an operable cluster that you can use for other things, and so that the work that we're doing to run OpenStack on top of Kubernetes leverages all of the operational constructs of Kubernetes, okay? And we'll explain how that works and why that works. But if you're installing Kubernetes and then massaging a whole bunch of stuff in it, that to me is a fail; it's not really using Kubernetes.

So I'm gonna take a second and describe Kubernetes. I'm not assuming you know Kubernetes that well. It is a container scheduler. We can have a cage match later about whether it's an orchestrator or a scheduler; it's easier in my mind to think of it as a scheduler. That means it positions containers. It does some things to keep them up, but for the most part it's not making a lot of decisions.
That would be a platform-as-a-service type thing. And it provides some really robust APIs to restart, place, and deal with the networking and lifecycle of containers. So you can say, take this container and replace it. Take this container and keep it running. And things that work well with Kubernetes are designed for Kubernetes. It is not magic pixie dust to take your legacy monolithic app and turn it into this wonderful auto-regenerative thing. It's designed to work for something called 12-factor applications, which have a different configuration pattern than, say, OpenStack. It's designed for immutable infrastructure, meaning that you don't patch things in place, you swap them out; a very simple definition of immutable. And it's meant for things that are service oriented. OpenStack is a good example of a service-oriented application, where we have a lot of different services that have to interact together. It probably could be smaller from a Kubernetes perspective, but it is a good service-oriented app.

If you wanna see it, this is a map we built in Cluster Ops that describes Kubernetes. If you're used to OpenStack topologies, this is probably just as scary because there are a lot of services. Kubernetes is a cloud native app, so it is designed as services. But you don't have to worry about a lot of these services. Really, the heart of Kubernetes is very, very, very simple. It has a container runner called the kubelet. It has a centralized coordination point called the API server. And then it's backed by etcd as a database. There are a lot of other little pieces that plug into that and monitor it, and there are adjacencies that we can worry about. But fundamentally, it's a service that runs on a node to schedule containers and a centralized database that tells that scheduler what to do.

So the first thing is, if you've been listening to what's going on, Kubernetes is rainbows, right? We've decided OpenStack is old hat. We've switched to Kubernetes because we're gonna find this pot of gold at the end of that rainbow. For people who don't know, Kubernetes is named after a steersman or helmsman of a ship, so you'll see a lot of ship analogies, and the logo is actually a ship's wheel. It's not gonna generate rainbows. It is pretty cool stuff. I'm very excited about what I see in the community and the types of abstractions that we get. It does not solve all problems. And some of the problems, this chart, I've been updating as I go and changing these arrows; some of them started as red. We've been moving through.

The OpenStack operations problem is not solved, right? That's the first starting point. You gotta realize, if we thought we already had the perfect way to install and operate OpenStack, we wouldn't need this conversation. So it's not solved. What I have seen is that the new deployments, not all of them, not all the historic ones, but basically every new deployment I see, is using containers as the packaging mechanism. There are a lot of good reasons for that. Containers are really valuable. So we're seeing containers all over the place in OpenStack deployments. Kubernetes is awesome at containers. Anybody who's using more than three containers likely needs a scheduler. There are a couple of them on the market: Docker, Rancher, Kubernetes, Mesos. There are actually four or five other ones. Kubernetes is getting a lot of mindshare. It's a good community. Kubernetes is, and this is not entirely true, stable, simple and secure.
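Before going further, to make that core model concrete: here is a minimal sketch of the kind of thing you hand the API server, a manifest declaring desired state, which gets recorded in etcd and acted on by the kubelets on the worker nodes. The names and image below are placeholders for illustration, not anything from an actual OpenStack chart, and it uses current API versions rather than the 1.5/1.6-era ones.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-service            # placeholder name
    spec:
      replicas: 3                      # desired state: "keep this many containers running"
      selector:
        matchLabels:
          app: example-service
      template:
        metadata:
          labels:
            app: example-service
        spec:
          containers:
          - name: example-service
            image: nginx:1.25          # placeholder image; any containerized service works
            ports:
            - containerPort: 80

Kubernetes continuously compares that declared state to reality; if a container dies or a node disappears, it schedules a replacement, which is the restart-and-replace behavior described above.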
We are still working on deployment operations capabilities, right? Moving OpenStack to run on Kubernetes, if Kubernetes isn't actually enterprise ready, doesn't gain us anything. The velocity of that transition is pretty good. We've been doing a lot of work; my company does a lot of work helping run an Ansible playbook set called Kargo that we think is really exciting. And that shows HA, enterprise readiness, secure deployment capability. So we feel like Kubernetes is good enough to be a base for this type of platform.

And then the last point here is that if you have Kubernetes, you get upgrades and HA for free. That's not true. If your application is designed to work with the Kubernetes patterns, then you will see upgrades and HA fall out of that pretty easily because of the semantics that Kubernetes gives you. But your application has to be ready to take advantage of that type of infrastructure, and that is where we'll probably spend a lot of time in this deck.

So before I talk about challenges and problems, I want to talk about some of the benefits. There are very real benefits to this approach. As I said earlier, this was one of my last slides before; I think it's actually important to pull it forward, because we are talking about a real alternative for OpenStack deployments, right? We're already doing all this Docker work. The patterns that you get for upgrading with Kubernetes are real. The challenge is you have to conform to them. So if we conform to what Kubernetes expects from an upgrade pattern, we do get that type of benefit. And we, as a community, need it. There's a really good job scheduler for maintenance, so when you want to do routine tasks that do housekeeping, you can throw them in as background tasks and that's taken care of for you. There's free fault tolerance where you can say keep this many containers running. It's not as simple in OpenStack because you end up pinned to different pieces of hardware to run your virtualization infrastructure, but you get some benefits in the control plane. There's a big one: if you assume that Kubernetes is going to become a dominant platform, then somebody who wants to run OpenStack could show up and just install OpenStack on top of that Kubernetes deployment. That's my next slide.

And then the other thing that's a benefit to a lot of people in thinking about this is that it's very constrained. So if and when, I would say when, the OpenStack community embraces Kubernetes as the dominant deployment choice, it's going to drive a lot of decisions because it's a more constrained environment. You're not gonna have all the options that we have enjoyed as a community. That's good; there are pros and cons to that. But fundamentally, what we want, what the community in my opinion needs very desperately, is reduced friction for these deployments. And that's really one of the things that we want to look at. We want to make it much easier to deploy and maintain an OpenStack infrastructure, and we want to be able to do it inside of a growing community.

One of the benefits of Kubernetes that's important to understand is that Kubernetes is a cloud infrastructure, so it has very low friction to deploy and install. If you want to run Kubernetes, you need an Amazon account and you can have it running in about five minutes. That is not true with OpenStack. OpenStack, I mean, we have DevStack, but that's not a running infrastructure, that's development tooling.
You need servers, you need to network them together, you need to do drives and all sorts of stuff, and it takes time. Even if you're doing a managed service, it takes time to get an OpenStack infrastructure running.

So, this is coming. I was on the skeptic side; you can go back and watch my earlier talks. I really thought that this was a bad idea initially, but it's coming, and I think it's very important for us to be pragmatic. There have been some great talks. AT&T did a talk earlier today. I've seen a whole bunch of other things going on about this Helm chart approach that we're going to outline, about how to actually get this to happen. So I see a lot happening in the community. Even since these slides, I've been watching people do work very similar to what I'm going to describe, in multiple projects. So we're actually seeing some forking and competition in that approach. Not my favorite thing to see, but that's what happens in open communities.

All right, so before I talk tech: this marketing message is confusing. It would be irresponsible not to say, hey, wait a second. OpenStack on Kubernetes: the thing that the market hears from that is Kubernetes. And that's true. One of the challenges with this is that as we describe this type of positioning, the clear message that people get is that OpenStack is not as big; Kubernetes is much bigger. And that's the reality, I think, of the message. It also hurts, and I think the foundation has really moved away from this. I missed the keynotes this morning, so maybe they proved me wrong this morning. But they've been moving away from this OpenStack one-platform message. If you think back to Barcelona or Austin, the message for OpenStack was one platform: containers, metal, and VMs. If you're putting Kubernetes under it, it's not as clear that that's the message. So OpenStack has to evolve; it has to respond to these challenges. There is no doubt that Kubernetes, in this model, is a challenge to where people have perceived OpenStack to be in the past. That's the way the technology goes.

And I think there's also some confusion between container operations and what this is. This is very much about Kubernetes deployment. Containerizing your OpenStack deployment is a great idea; a lot of people are doing it, but it is not the same as what we're talking about. The purpose of this talk is to lay out Kubernetes as the manager to control those containers. There are a lot of ways, I've seen tons of them, very creative, to use Ansible and Chef and Juju and things like that to place containers on servers and then manage the containers. That's a different strategy, because you still have to manage those containers using something. In this model, you don't; you use Kubernetes to manage them. I've been beating on that point, but I think it's really important.

This slide just repeats that, so I'm not really gonna go over it, except to point out that the idea here is that we need to be thinking of ways that OpenStack adopts Kubernetes principles, like the 12-factor application pattern. You need to be able to handle containers being added and removed. There are operational requirements for Kubernetes that OpenStack is going to have to respond to as this process gains momentum. And those are things like containers changing IP addresses and going away off the infrastructure. It's the idea that they're immutable and you can't configure them in place, so you have to have ways to get configuration into containers; that's important.
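As a hedged sketch of what that looks like in practice (not how any particular OpenStack chart actually does it), configuration can live in a ConfigMap and be injected into the container at start time, so the image itself stays immutable; the names and keys here are hypothetical.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-svc-config         # hypothetical name
    data:
      DB_HOST: mariadb                 # values resolved by the cluster, not baked into the image
      LOG_LEVEL: info
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-svc
    spec:
      containers:
      - name: example-svc
        image: nginx:1.25              # placeholder image
        envFrom:
        - configMapRef:
            name: example-svc-config   # configuration arrives when the container starts

When the configuration changes, you don't edit the running container; you roll out new containers that pick up the new values.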
So there are a lot of aspects of creating a Kubernetes microservices application that were not factored into the OpenStack designs. The OpenStack designs assumed service persistence, and Kubernetes applications do not assume service persistence, if that makes sense, okay? I'm not seeing tons of people nodding, but I'm not gonna spend more time; if that's confusing we'll get into it in Q&A and explain it.

So I have a much bigger architecture picture, but I want to have a little bit of text to prep this. Kubernetes is now at version 1.6. The 1.5 release brought in some critical primitives that we needed to make this work go. So before the last two releases, we really couldn't have done OpenStack on Kubernetes.

One of the key elements for this whole process is something called Helm. Helm is Heat, this is a horrible analogy, but it's workable: Helm is Heat for Kubernetes. You can create a Helm chart; it's a YAML file that describes how an application is positioned and how it interacts with other applications. So just like you would take Heat and spin up multiple VMs, you can take a Helm chart and spin up multiple containers and wire them together and describe requirements. If you're used to Docker, Docker has something called Compose; Helm and Compose are very similar things. Helm is not Kubernetes. Helm is an add-on to Kubernetes. Kubernetes has worked really hard to keep a very small core, and so the community resists adding a project like Helm into the core, to keep things like that in the ecosystem. So you will probably see alternatives to Helm surface, and that's normal. But the efforts that we're seeing do use Helm; it's the dominant choice in this space. Okay, so that's important.

There's something called tagging. What you can do with Kubernetes is say this node has these attributes, and then when you go spin up a Helm chart, you can say put the Helm chart on nodes that have these capabilities. So you could make them compute nodes or control nodes or Ceph nodes or things like that. And so you can actually have an affinity within the infrastructure based on tagging. That becomes an important component in this; part of building a cluster for OpenStack is to tag things correctly.

There's a technology called a DaemonSet that's very important for this type of work. In Kubernetes, normally you want to just run containers and let Kubernetes schedule them. You do not want them to be privileged containers. There is an option, with a DaemonSet, to run a container as a privileged container. Without that capability, you couldn't run Neutron and allow it to have access to the network, or Swift and allow it to have access to the drives; you'd have the containers operating in a more isolated mode. And that's important because it's part of what allows Kubernetes to do an OpenStack deployment, which doesn't act like a normal Kubernetes app, because you're going to have long-running processes and long-running containers.

Databases require you to have stateful sets. A StatefulSet is a 1.5 feature; it used to be called PetSets. Some people are happy about that change, and some people aren't. Stateful sets mean that there is shared or persistent storage for a container. So if you have a container that must maintain state, or in the pets-versus-cattle analogy must be a pet, you can have that container keep its state. A database is a really good example of a stateful set application.
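As a rough sketch of how a couple of those primitives fit together: the node label provides the tagging affinity described above, and a volume claim template gives the replica persistent storage. The label, image, and storage class are assumptions for illustration, not taken from any specific OpenStack-Helm chart.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mariadb                          # a database is the classic stateful example
    spec:
      serviceName: mariadb
      replicas: 1
      selector:
        matchLabels:
          app: mariadb
      template:
        metadata:
          labels:
            app: mariadb
        spec:
          nodeSelector:
            openstack-control-plane: enabled # hypothetical tag applied when the cluster was built
          containers:
          - name: mariadb
            image: mariadb:10.5              # illustrative image and version
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: example-only            # example only; use a Secret in real life
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: ceph-rbd         # assumes a Ceph-backed storage class exists
          resources:
            requests:
              storage: 10Gi

If the pod is rescheduled, the claim and the data behind it follow it, which is the "pet" behavior described above.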
And for us to do databases in this model, which we're doing, you need to have stateful sets, and then those have to be backed by persistent storage; what we actually end up with is Ceph for that. I'll show that in the next slide.

And then one of the things that was really important for the AT&T crew, and I think this is very useful, is that there are multiple sources for the containers that include OpenStack. In everything we're describing, we're talking about container scheduling. The containers have to come from somewhere. Somewhere, somebody is building OpenStack containers that have the OpenStack Python code in them and are packaged according to the semantics of the Helm charts. Somewhere that has to be done. And if you want to run this on ARM, somebody has to build them for ARM. So we have an interesting situation where, if you want containers from a trusted source, then you're going to need to have a source for the containers. Containers don't magically show up like mushrooms; they actually have to be built and maintained. So if somebody patches OpenStack, then somebody has to create a new container before you can roll that container out into your infrastructure.

So those are the primitives. Hopefully that helps translate some of what we're talking about into Kubernetes speak. If you need more of the Kubernetes lexicon, we can talk about that; I'm happy to help with that. And there are a lot of things that we really have to do that aren't solved problems yet. I'm going to flip to the next slide and we'll talk about this.

So this is an exploded version of that architecture, and it's designed to show you a lot more detail about how these things work together. Kubernetes is a platform that has its own control infrastructure, so there's a set of control nodes. If you're running in production, you're going to have three of them, at least. You might separate out etcd and all sorts of things. Helm is an application that's going to run on those controllers; it's not extra nodes. This is not meant to be a node mapping, by the way; it's a logical mapping. You have a set of Kubernetes workers, and the Kubernetes workers are where OpenStack actually runs. So the idea here is that I could have a Kubernetes cluster and run the OpenStack control plane in that cluster as a general capability, and Kubernetes would schedule it and keep the database up, keep the message bus up. All of the OpenStack management components would be running.

Because I need stateful sets, I need Ceph installed first and wired in as the stateful set backend, so I can then use Ceph for that persistent storage. That was something I thought would be a lot harder; it actually came together pretty quickly. I don't think it's a production grade Ceph cluster yet. To do that, you actually need physical infrastructure; that's a longer story, and I will take it as a question afterwards if people are interested. But there are challenges like, how do I create a production grade, scaled Ceph cluster inside a Kubernetes cluster? It's a great application for Kubernetes, and there are a lot of people who are interested in running Kubernetes to maintain the Ceph infrastructure too. That's not an OpenStack problem, it's a Ceph problem, but it's really interesting. And the same thing is true with software-defined networking layers. We need to be able to run software-defined networking in a way that attaches into the infrastructure correctly. Kubernetes has its own software-defined networking stack.
And by default, containers are going to go through that stack. So if you're running Neutron with Kubernetes, we now have a conflict between how Neutron wants to interact with an SDN layer and how Kubernetes wants to interact with an SDN layer. And that will be a challenge. In some cases, you could use something like Romana, which is a flat networking technology in Kubernetes, and then just let Neutron do its thing. It could be that we actually want to bind to another NIC. There are a lot of ways to solve this problem; some of them are going to be more elegant than others. These are definitely workable problems, but they're not solved yet.

So when you look at this chart, I'll go back one: we have to be able to handle networking. We have to be able to handle storage. We have to be able to deal with the fact that OpenStack expects to get its configuration in a way that is different from where Kubernetes expects configuration to come from. Kubernetes really doesn't do well if you drop a lot of files, because files are persistent objects, into your configuration space. We don't want to be mapping a file on a hard disk into a container to make it run for Kubernetes. We can, but that then drives a whole bunch of bad behaviors in Kubernetes. So there are challenges that we have to resolve in doing this.

Now, my company's been playing with this. We have some demos. I have a video; I can't do a live demo here, but I have a video that shows a one-click deployment that lays down Kubernetes and then lays down OpenStack on top of it. This is achievable, I think, with a couple months of work, especially with the progress and interest, so we'll actually see this being a very practical deployment. But Kubernetes doesn't have any knowledge of the infrastructure. So to make this work, we're going to have to poke infrastructure information into this Kubernetes deployment and then have that drive the Helm charts, which then drive the deployments. So there's work that we need to do to make all this stuff go.

I'll do the gratuitous plug for what RackN does: we do an open source project called Digital Rebar that collects a whole bunch of infrastructure information and then can inject it into installs. So it's possible to do that. Without something like what we do, you're going to be taking Ansible roles or hand-crafting Helm charts to get that infrastructure information in. And this is one of the places where Kubernetes is a stretch for OpenStack. Kubernetes' job is to make you not care about infrastructure. It really hides a lot of the infrastructure information that you would want for a good OpenStack deployment. If you're building a Ceph cluster, you want to know where the SSDs are because you're going to need to put your caches on SSD. You're going to want a JBOD array, and you're going to want the drives enumerated to build that Ceph cluster. Makes perfect sense. None of those things are going to show up in Kubernetes; it doesn't have any infrastructure information. So you're going to have to add it into the Helm charts, or add it into the configuration when you bring up that Ceph cluster. And then you're going to have to know which nodes that Ceph cluster is coming up on so it has the correct drive information.

And then, I told you there's work to do, and then we want to do this in a way that's generic, because it's not helpful for the community if you can spin this up in your lab or in your facility with hand-crafted Helm charts, and then somebody turns around and makes an improvement and you're forked.
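To give a feel for what poking that infrastructure information in might look like, here is a purely hypothetical Helm values file carrying the kind of facts Kubernetes won't tell you. The keys don't come from any real chart; they just illustrate the pattern of feeding per-site hardware details into the templates.

    # values-site.yaml -- hypothetical per-site overrides, passed as: helm install -f values-site.yaml ...
    ceph:
      osd_devices:                     # enumerated JBOD drives that Kubernetes knows nothing about
        - /dev/sdb
        - /dev/sdc
      journal_device: /dev/nvme0n1     # SSD reserved for journals/caches
    network:
      external_interface: eth2         # NIC reserved for provider/external traffic

Something still has to discover and fill in those values per site, which is the gap that tools like Digital Rebar, Ansible roles, or hand-edited charts are filling today.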
So part of the goal for this, a huge part of the motivation for doing it, is that if you're using this approach, we want everybody to be able to benefit from improvements within the community. We're actually moving into a place where the Kubernetes platform allows us to start sharing operational lessons and improving together as a community. If we start forking it to make this stuff happen, we've reduced that benefit. I don't want to see us have operational chaos version two, where everybody's doing OpenStack on Kubernetes but we're all doing it differently. We really want to find ways to bring that work back in.

As a parenthetical aside: even in the Kubernetes community we're struggling with this, just like OpenStack struggled with it. There's a list of over 60 different ways to install Kubernetes. It's not really quite that bad, but there's a significant number of people who have picked up different ways to do it. My company is working very hard not to follow that pattern, and that's why we use the Kargo Ansible playbooks straight out of the community. We do not want to have any custom Kubernetes deployments at all. We don't want to have any custom OpenStack deployments. From our perspective, a distro is an anti-pattern for the core aspects of this technology. Everybody we talk to wants to stay upstream for something like a deployment of the core bits, and so we're working very hard to do that. And I think it's important here.

I have one more point. So, configuration: the idea here is immutable infrastructure. How many people are familiar with the term immutable infrastructure? Awesome. Wow, this is on a great curve. So part of the challenge with this, part of something that I think is gonna have to come back upstream to OpenStack, is dealing with immutable infrastructure from a configuration point of view. We need to make OpenStack configuration not rely on configuration files that have to be injected into the infrastructure. We need to be able to spin up a service and have it pull in its configuration. I believe Keystone v3 has some of those capabilities and we could leverage that. But this approach, as it gains momentum, will put stress on that effort.

I just spent a whole bunch of time talking about this, so I'm not gonna try and reread the slide. What I do wanna emphasize is that the point of this is to try and get OpenStack to a more operational place. We have to look at what's been going on. We have not been converging operational practices into a single set of playbooks or roles or technologies; we continue to have different camps of deployers. This is potentially an N-plus-one problem, but what I've been seeing in the field, in my conversations and in the hallways here, is that there's a significant number of people who are moving their deployment patterns and plans into a Kubernetes Helm chart approach. Which means I'm very optimistic that we're gonna see a convergence in the community, where people are actually working on this approach and making it go. The other thing that makes me excited is seeing Kubernetes on metal, which I think is a logical conclusion of where people are gonna go. This approach does not work if you think Kubernetes will not run on metal. Kubernetes is the base; it's gonna be the thing running on metal.

All right, I've talked through all these other points, so I'm gonna move on. I'll upload these slides to SlideShare. I'm @zehicle online, so I'll tweet this out, and if you follow me, you'll see it.
So operability is not solved by this aspect alone. We can't just rub Kubernetes all over OpenStack, as much as we're trying to right now, and have it solve our operability problems. Operability is not just deployment; it is logging and monitoring and all these other important pieces. We have to provide leadership; technical leadership has to be motivated to solve these problems. OpenStack is going to have to prioritize work that makes this easier to do, and that's gonna disrupt other plans, or the people who want this to happen are gonna have to show up in the community and fight for it, same as always. I think we have to resolve or accept the messaging confusion, because there's no doubt that this messaging will push OpenStack off its pedestal to an extent. If you're listening to the sessions and keynotes and not already thinking that OpenStack is being pushed off the pedestal, then listen more carefully. I think we're already in a position where OpenStack isn't the data center operating system at this point, right? We are part of an infrastructure in the data center. I don't think anybody gets to claim to be the data center operating system, messaging be damned. And so I think there's collaboration.

I think Kubernetes ultimately will have a larger footprint. This was really controversial when I said it six months ago, and I still stand by it now. Kubernetes runs in Amazon, Google, on OpenStack, in Azure, on metal, on Raspberry Pis. It is showing up everywhere with very little friction, which means that developers are gonna target this platform and use this platform, and it's gonna actually have a bigger footprint than the people who are gonna run and deploy OpenStack. You can already see that. And so it is very important: if you take nothing else away from this, it should be the "oh shit, I better be figuring out Kubernetes or some other container scheduling system, because that is gonna be the first thing that I use as an abstraction layer." And then OpenStack is gonna have other benefits around that, helping me with VMs and things like that. But Kubernetes is gonna be a bigger thing. And the prediction that I'll leave you with is that in 2018, so next year, the next time we're talking about OpenStack installs on the North American continent, I predict that Kubernetes will be the install method that people are using. And I have data; I can't share all my data, but I have good reason to believe that's true.

Cool, thank you, appreciate it. I really don't have any time for questions, do I? Is there one quick question? I'll take them at the front of the room then. Oh, I do have one, thank you.

So the question is, why can't you use config files with Kubernetes? The problem is that you don't know anything about the actual running instance of that container beforehand. So you can't inject configuration information ahead of time about where it's gonna run, what its IP address or its name will be, or how much memory it's gonna have; all of those things are actually decided dynamically by Kubernetes when that container is started. And it has to be done dynamically, because Kubernetes could turn off that container and move it to another machine, change its IP address, change its name, change all sorts of aspects of what that system's doing. So if you expect to configure that container ahead of time, you are assuming that you know what that container is gonna do, and you don't.
And so you really have to give up the idea that you know ahead of time what's gonna happen with that container. Now, OpenStack requires us to do that, so we pin containers to machines and we can fudge it, but it's fudging it. It's actually hurting the long-term capabilities of how this stuff would work. So I hope that helps. It's a great question. It's hard to get your head around.

And one more. Actually, there is a very elegant solution for that problem within the Kubernetes environment: you use the init container. So before your main process starts, the init container runs first, and then you can grab whatever configuration information you need from the ConfigMap, which is mounted to your pod, and just apply it to the process. Because when the init container starts, you can get your name, the host where you run, the IP address, pretty much anything; it's available. You push it down to your main process, and your main process comes up with the configuration it needs. So configuration is not really an issue with Kubernetes.

To me, that's a workaround. But I agree with you, there are ways. No, no, it was designed that way. Okay, good. So one of the things I love about giving this talk is that we're consistently finding places where either I didn't know something that we could do, or we're finding people are solving these problems much faster. So this has been a really exciting, fast-moving project, and we're gonna continue to see progress and evolution, which reinforces my point. Thank you. Appreciate the time.
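For readers who want to see the pattern described in that last exchange, here is a minimal sketch: an init container reads runtime facts and a template mounted from a ConfigMap, renders the real config file into a shared volume, and the main process then starts with it. The images, paths, and template below are placeholders, not from any real OpenStack-Helm chart.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example-api-template           # hypothetical template holder
    data:
      service.conf.tmpl: |
        [DEFAULT]
        bind_host = __POD_IP__
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: example-api
    spec:
      initContainers:
      - name: render-config
        image: busybox:1.36                # placeholder; real charts use purpose-built images
        command:
        - sh
        - -c
        - sed "s/__POD_IP__/$POD_IP/" /tmpl/service.conf.tmpl > /conf/service.conf
        env:
        - name: POD_IP                     # runtime fact only known once the pod is scheduled
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: template
          mountPath: /tmpl
        - name: rendered
          mountPath: /conf
      containers:
      - name: api
        image: nginx:1.25                  # placeholder for the actual service image
        volumeMounts:
        - name: rendered
          mountPath: /etc/example          # main process starts with the freshly rendered file
      volumes:
      - name: template
        configMap:
          name: example-api-template
      - name: rendered
        emptyDir: {}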