All right, I think we're live. OK, cool. Well, welcome. I think I'm one of the last sessions, so I'm not going to hang around — I'm going to go quickly. The title of this talk said "delivering applications together," but my original thought had "better" in it, so I've put "better" back in parentheses; the title is even better now. So how many of you — I can barely see, because these lights are wildly bright — how many of you are container experts? Raise your hand if you feel like you're a container expert. OK. And raise your hand if you think you're an OpenStack expert. All right, so a few. OK, good, this should be good — now I know where to pitch it.

So here's what I want to get out of this: I'm going to go through some basic history, and then I'm going to talk about some fundamental concepts in a way that's probably different from how you've heard them portrayed before, because I want you to leave with the conclusion that OpenShift and OpenStack actually work very well together, and that you get a lot further if you pair them than if you use either one alone. Let me start with: how many of you would fall in the top category? Anybody? OK, and how about the bottom category? OK, a couple. All right.

So my goal by the end of this is to convince you that they really do provide unique things together. I think it's fundamentally about exposition of resources and consumption of resources. If you think about an operating system traditionally, it's a kernel and programs, right? The programs ask for resources, and the kernel provides resources. The kernel lights up the hardware. And a program without a kernel is what? It's a file, right? It's nothing. Without resources, without access to memory, it can't even exist — fundamentally it needs resources.

Historically, I would say the left side here is what exposition of resources looked like: you'd bring in another computer, plug it in, set it down — here's more resources, have at it. In the modern day it's software-defined: now I just define what I want as an administrator, give people access, and then they go do whatever they want. Obviously much better. You should also be starting to notice that this is fundamentally not what OpenShift does in any way, shape, or form. It doesn't expose resources — and by that I mean it doesn't expose the hardware resources. You obviously have some ability to go in and collect resources, but OpenShift itself doesn't expose them. What it fundamentally does is consume resources.

Back in the day — and I joke, this is a pstree off of my local laptop; I was going to have a picture of some developer eating up all the resources — that's what we thought was consuming resources: those damn developers eating everything up. Nowadays I think we're smarter. We realize that without the developers we don't have applications, and applications are why we have jobs. So at the end of the day, the app is now first class and we understand that.
And that's really what OpenShift is about, right? It's about exposing the application, or the components necessary to build the application, and then going and asking somebody else for those resources. It could live on the hardware, but at the end of the day it's asking somebody for resources.

Before I get into that, I want to go deeper into the consumption and exposition of resources. It took me many years of research to come up with this very sophisticated analysis of what containers are. At the end of the day, this is what containers are: they're fancy files, plus a fancy file server called the registry server, and when they're running, they're fancy processes. It sounds kind of cheesy, but that's really all they are. Think about what a program is: a program is just a single file living on a file system somewhere — that's at rest. When it's running, it's in memory, and that's a running process.

So I always joke that you can literally substitute the word "process." Whenever somebody asks, "are containers good for blah, blah, blah?" replace the word and ask the question again. Are containers good for databases? Well, are processes good for databases? Of course the answer is yes, because what else is there? That's why I say they're really just fancy processes. Instead of a fork system call or an exec, to go back to the Unix days, you're using a clone system call. The clone call creates essentially a virtual set of data structures in the kernel that are then consumed inside of that process, and the process thinks it's running in its own operating system — but it's not. And on disk, instead of a single file, it's multiple files packaged together in the Docker format.

The real magic of Docker was the push. Docker push is the magic, because I can pull down an LXD image or an LXC image, I can pull down any format, I can pull down a VMDK — but how do I push it back out once I've made changes to it? Well, I can SCP it back to a server and then pull it back off a web server, back and forth. Having the layers, and having an actual protocol in the registry server, is what really makes it a fancy file server. It's a typical CRUD application, right? And in fact, that's what I'd say separates it from Cinder, or from Swift in general — with the way some of OpenStack's storage works, you don't really want people pushing new layers of images; that's just not fundamentally how it works. It's essentially read-only.
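As a quick aside that isn't from the talk: if you want to poke at that "fancy process" idea yourself, a minimal sketch on a Linux box (assuming util-linux is installed, and using unshare rather than a raw clone() call) looks something like this:

    # Start a shell in its own PID and mount namespaces:
    sudo unshare --fork --pid --mount-proc bash

    # Inside that shell, the process thinks it has the machine to itself:
    ps -ef        # only this bash (as PID 1) and ps show up
    echo $$       # prints 1
    exit

Container runtimes do essentially this, plus more namespaces (network, UTS, IPC, user), cgroups for resource limits, and a layered root filesystem — but the kernel facility underneath is the same one.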
Then I want to talk about — and I'm building up to a crescendo here — the idea I think Docker had right when they talked about standardization. I don't know if you saw the old slides from three-ish years ago, where they would show a piano and a barrel and a truck and all these different things being loaded onto a ship, and they would say: it's hard to load because they're all different shapes, and containers give us the same standard shape, with the hooks in the same place, so we can load them. I would say that's part of the value proposition — that gives you the hooks, the cranes in this picture. But there's another value proposition that I don't think a lot of people have highlighted, and it's that we're actually packaging at the factory, not at the dock.

Let me walk through this. In 1906 — the top image — it took something like 45 days to load a ship. You would literally load up a truck manually, people would load it by hand, then you'd drive down to the dock, and it would take 45 days to load the ship. From the time the ship came in to the time it left was 45 days. Now imagine in that process you were loading something fragile, like lamps, and you break a few and it doesn't quite work. That's the analogy: the program doesn't quite work, right, when I get it at the dock. So I fire up a VM, I run Puppet, and something dies halfway through. And when I'm trying to scale up an application, I've got to bring up 20 more nodes and run Puppet on all 20 of them, and that's a pain in the butt.

So we do all these things to try to bake it at the factory: we run Puppet, save that AMI or that Glance image, push it, and try to move the work back toward the factory and modify from there. We've done all these hacks to get there. But what Docker really brought was that, natively, we were packing it at the factory. You build it on your laptop and you pack the container; if a few lamps break, I can figure it out right there. I can keep reloading the container until I get exactly what I want. Then I put it down on the dock, and the sysadmins just have to load it on the ship. We remove a lot of that pain, at least from the user space perspective: you've got a pristine user space where everything is set up exactly the way the developer wanted. That's fundamentally different. So I say: let's build at the factory, not at the dock — because building at the dock sucks.
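To make "packing at the factory" concrete, here's a minimal sketch that isn't from the talk — the image names and registry are placeholders — of building an image on a laptop and pushing it to a registry server:

    cat > Dockerfile <<'EOF'
    FROM registry.access.redhat.com/rhel7
    RUN yum -y install httpd && yum clean all
    COPY index.html /var/www/html/index.html
    CMD ["httpd", "-DFOREGROUND"]
    EOF

    # Build (and rebuild) at the "factory" until it's right...
    docker build -t registry.example.com/demo/webapp:1.0 .
    docker run -d -p 8080:80 registry.example.com/demo/webapp:1.0

    # ...then "load it on the ship": push the layers to the registry server.
    docker push registry.example.com/demo/webapp:1.0

The push is the part that matters here: the layers and the registry protocol are what let you hand the finished, tested thing to the dock instead of assembling it there.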
OK, so: build it at the factory. Now you kind of understand why we want containers, but let's look at the evolution and why we really get into OpenShift and OpenStack. I mentioned before that processes are fundamentally the basis — the standard data structure we use to run programs. And I joke, is there really anything else? Is it good for databases? Yes, of course, because processes are really the only thing we have, unless we get into unikernels, and we will not go down that route. At the end of the day, look at the bottom row here: it's exposition of resources. The kernel exposes the resources; whether it's a container or a process, the kernel is still what lights up the hardware. And on the consumption side, it's a process, it's a container, it's a bunch of containers — it doesn't really matter. But when you get into a distributed systems environment, where you need a lot of hardware lit up and a lot of fancy processes orchestrated, OpenShift and OpenStack fundamentally do different things. OpenShift is very good at orchestrating the things that were built at the factory, as opposed to the things that were built on the dock.

And I would argue OpenStack is pretty good at building stuff at the dock. With Heat templates and Puppet you can bake AMIs and start to try to move the work back to the factory, but it's a lot harder, because you don't have the layers and you don't have the Dockerfiles. If you bake all that work back into a box, it's a box with no manifest. You can bake it into a Glance image, but then you don't know what's in it, and then you have image sprawl and you don't know what's going on. So fundamentally, I think OpenStack is very good at exposing the resources, and OpenShift is very good at consuming the resources.

So, getting back to my question: why wouldn't I just use OpenShift everywhere? Why wouldn't I just dump every app in OpenShift? Since we've decided that building at the factory is better, let's just use that for everything, right? Well, there are a few places where it actually falls down a bit.

So I came up with a tenancy scale. I wrote this down just last week — well, I actually invented it about a year ago, but I finally wrote it down — and I used nice Red Hat icons to show it to you in a pretty way. People ask me all the time, "is a container good for my database?" And I'm like, well, a container is nothing more than a fancy process, so yes — but it's really about tenancy. If I need a database and it needs to be isolated from other processes, how isolated does it need to be? On the right end of this scale: it's fine, I don't care. Back in 1998 when I started, I would just telnet into a web server, and I could see other people's files and do a ps and watch their processes running, and that was totally fine back then. That was literally how things were done at universities all over the country: you'd telnet in, and process isolation was enough.

As you move along the scale, maybe that's not quite good enough. Maybe I need a container, because I don't really want people to be able to see each other's processes. But maybe that's not good enough either. Maybe I need a VM, because I'm worried about noisy neighbors — which we've started to be able to handle — or because I'm worried about somebody escalating privileges inside of a container; unless we're running user namespaces, there's the risk of escalating to root inside a container. So maybe I actually want to segregate by VM. But maybe that's not good enough. Maybe I need a physical server, because I'm worried about a physical server going down and we don't have good enough scheduling of where the VMs land. Or maybe I need one in a different rack, or — you know what — at the end of the day I want it on two different continents with different weather patterns. That's the most isolation. Or maybe we put it on the moon; I guess I could have added a little moon and an earth to the scale. We could try to isolate it even further, but obviously that's going to be pretty expensive. I don't think anyone really wants to go quite that far.
But I have literally had that question. I worked in data centers for a long time, and we'd literally get asked: where's your other data center? Is it in another weather pattern? Is it going to be affected by earthquakes? So I'd argue it's fundamentally about tenancy and isolation, and about figuring out what your level of risk aversion is.

Then there's another challenge to containerizing, another framework I came up with: code, configuration, and data. Think about what's inside of a container image — technically they're repositories, layered images that we call repositories, but for this talk I'll keep it higher level and say images. The question is what we want to put in those images, Docker containers in particular. I saw the LXD talk, and I actually thought it was pretty good, but they preach that you can put anything in a container. Red Hat doesn't completely disagree with that — we're working to put systemd in a container — but fundamentally I think we do agree that you should only put the code in the container, not the configuration and not the data. I would argue that LXD says put it all in there, treat it like a VM, and that's fine, but now you're packing at the dock again. You're back to building at the dock instead of at the factory, and that's fundamental. I think there's a happy balance with running multiple processes in a Docker container, but you still separate out the configuration and the data.

So I always say it's fundamentally about code, configuration, and data. I want my mysqld to live in the container, so when I do a docker pull, I'm only pulling the code — that's it. If that thing dies while it's running, OK, I restart it, boom: my configuration and my data live somewhere else. Kubernetes — OpenShift — can pull that back in, whether it's exposing configuration through secrets or exposing data through volumes, persistent volumes, things like that. Fundamentally, the process can now run anywhere. The fancy process can run anywhere, and it will get access to its data and configuration again. We want to separate these things.

And I say "other stuff" because there's a whole bunch of other stuff you have to worry about too — I have a blog entry out recently on this that I forgot to put in the links, but I will before I make these slides public. I'd say code, configuration, and data are the minimum set you really need to worry about. It's easy with something like /etc/my.cnf and /var/lib/mysql — you know where those things go. But Satellite 6 is a little hairier: it dumps stuff all over the file system and we're not exactly sure where. Maybe I click something in the web interface and it drops a file somewhere, and I don't know where that goes. That's going to be harder to get into a container.
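To make the code/configuration/data split concrete, going back to the mysqld example: a rough sketch, not from the talk, with the image name and host paths as placeholders, would be to keep only the code in the image and mount everything else in:

    # Configuration and data are mounted in from outside the image.
    docker run -d --name db \
      -v /srv/db/my.cnf:/etc/my.cnf:ro \
      -v /srv/db/data:/var/lib/mysql \
      registry.example.com/rhscl/mysql:5.7

If that container dies, you can pull and restart the same image on any host and point it at the same configuration and data; nothing you care about was stored inside the container. Kubernetes and OpenShift do the same thing declaratively with secrets and persistent volumes.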
I'd also say there are a whole bunch of other caveats. There's licensing: wait until you put some proprietary, licensed code in a container image, ship it out to a registry server, the wrong people pull it down and run it, and your company owes like $2 million in licensing. So fundamentally there are some licensing issues you've got to think about.

There's also the installer issue — I have a whole list of these that I go through in depth. Docker assumes all kinds of things: that you can install the program in some fairly easy way, with RPMs or deb packages or something like that, and that the code, the configuration, and the data can all be easily identified so I know where they're at. It assumes the installer is normal, like an RPM or an apt install. But what if you have an install.sh? I was literally talking to a customer a few weeks back, and they said, well, we have this really nasty program — I won't name it because I don't want to dog them, they're a partner of ours — and it had a typical Unix install with an install.sh. How do you get that into a container? Well, I can fire up a shell, get into it, run the installer, see what happens, and save it. I don't know exactly what happened inside, but whatever, it's good enough: stuff it in a registry and people can pull it down and use it. It was a network scanning tool, not a database or an app server, so you can imagine it's still pretty convenient to have it in a registry server, put it out there, and let people pull it down and use it where they want. So there are a lot of other things to think about. Sometimes, at the end of the day, it's easier to just put it in a VM. That's what I'm getting at: sometimes it's just too hard to identify all the things. Honestly, in my opinion, there's a lot of challenge over the next five or ten years in getting there all the way.

So for that — and this is my only solution slide, if you will — Red Hat's view of the world is this thing called Cloud Suite, for applications, which we released last week. Fundamentally, if you look at the middle three boxes, the idea is that you can mix and match applications that are made of containers and VMs. You can imagine a scenario where you want the front end in containers, but there's some weird proprietary message bus that needs to live in a VM because we can't containerize it. Maybe you need an object store and that lives in a container, and you have a hardcore Oracle database that still needs to live in a VM. The idea is to help you orchestrate these things across VMs and containers, using OpenStack and OpenShift. In fact, we did a bit of a design summit a couple of months back where we built this out with Heat templates and with YAML files that define the Kubernetes objects for OpenShift. It's very possible to do this, where you essentially build a cross-platform app.

So now I want to dig into some of the integration points where OpenShift and OpenStack work well together, and then I'm going to do a little bit of a demo. This first example is in fact what I'm going to demo for you. Again, getting back to exposition and consumption of resources: storage is a perfect example. The object types in Kubernetes, if you aren't aware of them — there are these things called persistent volume claims, and there are persistent volumes.
Persistent volumes are exposition of resources, and persistent volume claims are consumption of resources. You essentially say, hey, I need a volume that's five gigabytes, and the system either says, hey, I have it, or it doesn't. And if the sysadmin hasn't logged into OpenShift and pre-provisioned a bunch of five gigabyte volumes — or maybe five, ten, 20, 40, 80 — you can imagine you get into this flavor problem. If I've pre-provisioned all of these storage volumes ahead of time, I've stranded a bunch of the resources and they can't be consumed efficiently, because if an app that only needed five gigabytes ends up getting one of the 80 gigabyte volumes, it's using too much and other people can't get access to those resources. So it would be really nice to dynamically slice up those volumes as I need them and only give myself five gigabytes. But I don't want a sysadmin to have to log into some virtualization system, carve off five gigabytes, and then say, hey, here's your five gigabytes. The idea is to get away from ticket systems and requests; I want this to happen automatically. It just so happens that OpenStack is really good at this. Cinder can easily do it: you send it a REST request, it gives you back five gigabytes — here's your volume, have fun, do whatever you want with it. So one of the things I'm going to demo is where we integrate OpenShift and OpenStack and I literally just request it in OpenShift: as a developer, I log in, I say give me five gigabytes, and I just get a five gigabyte volume. I don't know where it came from, and that's awesome, because at the end of the day it came from OpenStack, and OpenStack was smart enough to say, here's your five gigabytes. That works pretty well.

We're also working on Heat templates — I have a link at the end that I'll show you. We're doing a ton of work around integration, and there's still a lot of hairiness around the networking, around what the right answer is. People ask me about Contrail and Project Calico and all these different things, and obviously a bunch of partners, Midokura and all these different networking plays. Fundamentally, if you just go install OpenStack and then install OpenShift on top of it, you end up with double-layered networking, because the OpenShift installer installs an SDN by default. So now you take a performance hit for that. One of the ways we've worked around this is to provision OpenShift through Heat templates and just rely on the underlying tenant network, so that OpenShift — which is fundamentally Kubernetes — gets the flat network it wants. It doesn't necessarily have to be flat, because that can be hidden from it by OpenStack, but that's where we're going right now: the Heat templates essentially take the Ansible installer and run the pieces and parts it needs to install OpenShift on a flat network, which is pretty cool. It's still a work in progress.
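For reference, the Cinder integration described a moment ago generally hinges on telling OpenShift/Kubernetes to use OpenStack as its cloud provider. A minimal sketch — the values are placeholders, and the exact file locations and installer wiring vary by version — looks roughly like this:

    # e.g. /etc/origin/cloudprovider/openstack.conf (location varies)
    [Global]
    auth-url    = https://keystone.example.com:5000/v2.0
    username    = openshift
    password    = secret
    tenant-name = openshift
    region      = RegionOne

    # The masters and nodes are then started with something like:
    #   --cloud-provider=openstack --cloud-config=/etc/origin/cloudprovider/openstack.conf
    # so a persistent volume claim can be satisfied by dynamically creating a Cinder volume.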
Another place that's interesting is the load balancer. Again, anything that was external back in the operating system days — DNS, load balancing, NFS volumes, things like that — all of that you have to rely on somebody to give you. OpenShift is good at just requesting that, and OpenStack happens to be good at serving it, so that's another place we're working on integration. I already touched on the install.

Another interesting one is identity. We can actually integrate with Keystone, but what a lot of people don't realize is that it's not actually that great to point OpenShift authentication directly at Keystone. Imagine I have a bunch of developers logging into OpenShift: I don't necessarily want all of those developers to have accounts in Keystone, and I don't necessarily want them to be able to go down to the platform and provision everything. It's still a work in progress, but I think internally we're at consensus that having both OpenStack and OpenShift authenticate against a single Active Directory or LDAP server is probably the better way. You have groups, you divide the users out the way you want, and maybe a few power-user devs can actually log into OpenStack and troubleshoot things, but in general I don't think you want every OpenShift user to have access to OpenStack. That's the consensus for now, but it does work — we can go through Keystone, or both can go against the same LDAP server. There are a lot of ways to configure it.

And then autoscaling — I don't know why the slide says resources, but the idea is that we have some Heat templates we're building that expose a URL. There's some autoscaling logic built into the Heat templates, so that if, say, CPU goes too high, it'll just add another OpenShift node. That's reactive autoscaling. We've also exposed URLs so that OpenShift can eventually go hit a URL and say: hey, somebody just asked me for a thousand containers; I know that in general I need about 100 containers per host, so give me ten new hosts and pre-provision those. So we're setting it up so that we can either reactively or proactively ask for more resources.

And now it's demo time. I want to point out that I used this Inception picture because what I'm about to demo — let me start at the top — is containers running inside of OpenShift, which is running in OpenStack VMs, on an OpenStack that is itself running in a VM on my laptop. And I did that in only eight gigs of RAM. So every second is going to feel like 45 days, like in 1906. All right. Hopefully this works — I've tried it a lot, and I love live demos at the end of the day.

So I want to show you what a PVC is — a persistent volume claim. This is an object type in Kubernetes. Can you guys read that all right, or would a little bigger be better? OK, good, because resizing would mess up my windows, and it looks so cool right now — I feel like a hacker. You can see this thing is pretty simple. I didn't want to show you a giant Kubernetes template that would confuse you. The cool thing about Kubernetes is that you build an application up out of all these different objects. You can put them all in one file — in fact, OpenShift has a concept called templates, and you can dump them all in — but templates are really hard to read; they get chaotic as they get big.
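The claim on screen isn't captured in this transcript, but a minimal persistent volume claim of that sort looks roughly like this (the name and size are assumed from the talk):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: dynamic-test-10
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi

An application then consumes it by referencing the claim by name in its pod spec, under volumes, as persistentVolumeClaim: claimName: dynamic-test-10.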
But the cool part is that I can just go create this one PVC, and it's not necessarily even part of an application. It names it dynamic-test-10. The idea is that I would then have an app come along and say, hey, I need this volume — so you connect an app to a persistent volume claim, and the persistent volume claim dynamically connects itself to a persistent volume. The persistent volume, in this case, since it's running on OpenStack, automatically goes and talks to OpenStack; OpenStack says, yeah, I can build you a one gig or five gig volume, and hands it back. So I'm going to run that now — it's a pretty simple command, you just do oc create -f and we create this PVC. And if we're lucky — oh, perfect. There it is, available. It went to available pretty quickly, by the way; I'd say that's pretty sexy, because that took sub two seconds.

Now let me finish showing you: oc get pv. You'll see what happened is that inside of OpenShift, it dynamically created what's called a persistent volume. That persistent volume used the driver to go set up a Cinder volume and then automatically connected everything together, which is pretty cool. And that happened in sub two seconds. So as a developer, I log in, I fire up my app, and it's up. I didn't have to file a ticket to get some volume sliced off — "I need a LUN with a one gigabyte volume" — and then wait two weeks for the storage admins to get back to me and finally add that thing. I just did it in two seconds.

I think this very clearly demonstrates the concept of consumption of resources and exposition of resources. OpenStack is really good at exposing these resources programmatically, and dynamically even, which is much better than that little computer sitting in the room again. And OpenShift just dynamically needs resources. You can imagine as I get into thousands of applications — an app might be made up of 30 or 40 volumes, right? It might not be just one volume like this. Even a simple WordPress example needs a volume for every piece, or at least two volumes. You can imagine much more complicated microservices apps where having this dynamic provisioning could save a lot of time.

So that's all I'm going to show — pretty simple, I think. And what are my conclusions? My conclusion is that, in a distributed systems environment, OpenShift and OpenStack together are kind of the new OS, if you will — the new kernel and the new process manager. There is scheduling happening in both places, and it's very complicated, but if you fundamentally think about it that way, I think it clarifies what you can do with it. And with that, I have a few minutes left. Any questions? Does anybody want to ask any questions? I can't see, so — sweet.

So the question is basically: is OpenShift an abstraction on Kubernetes? That's an interesting and very good question. OpenShift actually exposes full-blown Kubernetes. Kubernetes is essentially a set of REST APIs, and if you look at an OpenShift master, it has a couple of different URLs — it actually exposes full Kubernetes.
It is essentially a rebuilt binary with all of the Kubernetes code in it and then extra objects added. I could go into the weeds on that if you want to go deep, but everything is in there. You can even run the kubectl command — in fact, I think we have it right here: kubectl get pvc. Boom, you can use the kubectl command. That's the kube binary, and it literally went and hit the OpenShift master and brought back the same data.

So the next question is basically: why do we need OpenShift at all? A lot of things are happening in Kubernetes — the load balancer is coming, persistent storage, networking policies all the way down to the hardware — so why put on another layer? Well, it's not exactly another layer; it's the same layer, extended. When you install OpenShift, you are installing Kubernetes, essentially, with extra objects — and that's a very, very good question. The way I'd answer it is this: last Friday I was at Container Camp with Tim Hockin and some other guys, and I literally asked them — if you look at Kube 1.2, what was one of the things that came in? Deployments. If you're familiar with replication controllers — I could get into the weeds about what these things are, but long story short, objects like that which existed in OpenShift were essentially a breeding ground for bringing them into Kubernetes, and there's more and more of that happening. The authorization stuff is being looked at for moving into Kubernetes, deployments are something that moved into Kubernetes, and there's work on web interfaces and what we want to do there.

Fundamentally, I'd say the BuildConfig objects are probably the biggest thing. Ask the question: how do you patch in Kubernetes? There's currently no way to patch the image. There are ways to patch the app fairly easily with some external building and such, but how do you patch the container? I have a pretty kick-butt demo that shows a cascading build, where a sysadmin can patch the image and you don't notice anything changed — WordPress looks the same — but if a developer logs in, they can actually change the code. So I'd argue build configs are probably one of the main reasons you'd really want OpenShift.
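This isn't the demo's actual template, but a minimal source-to-image BuildConfig sketch (the repository, images, and names are placeholders) looks roughly like this:

    apiVersion: v1
    kind: BuildConfig
    metadata:
      name: blog
    spec:
      source:
        type: Git
        git:
          uri: https://github.com/example/blog.git
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: php:latest
      output:
        to:
          kind: ImageStreamTag
          name: blog:latest
      triggers:
      - type: ConfigChange
      - type: ImageChange

The ImageChange trigger is what makes the cascading build idea possible: when the builder or base image gets patched, the application image is rebuilt and rolled out without the developer having to do anything.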
And then, again, you can view OpenShift as a breeding ground for some of the best ideas. Red Hat is pretty conservative about what we'll add to the Kubernetes code base to make OpenShift, but some of those fundamentally good ideas are getting into Kubernetes as we speak. So you're kind of getting the best of both worlds: you're getting stability, where we backport features just like we do with Red Hat Enterprise Linux, but you're also getting best-of-breed features that maybe Kubernetes isn't quite ready for. It's a weird balance, because we're used to an upstream and a downstream, and the relationship between Kubernetes and OpenShift is almost cyclical. We fork — make a copy of the code at Kube 1.2, snapshot that, add our patches, and create OpenShift, which is essentially a super-Kube — and then test those ideas with the Red Hat customer base, which is a very different user base in a lot of ways than Kube's, and the Kubernetes community pulls in the ideas it wants. It's a way for us to satisfy our customers, because honestly there are a lot of use cases that are kind of scary and new even for the Kube world, since Kubernetes was pretty much designed for web applications, and obviously that's not all Red Hat wants to do with it. We want to do a lot more than just web apps. So there's some breeding-ground activity going on there. Hopefully that explains it. Any other questions?

"You did such a good job with that one, I figured I'd ask the other side. I'm still trying to understand why connect OpenShift and OpenStack at all. Why not just have a container world and your OpenStack world and basically keep them separate? Is connecting them just going to end up cleaner, or are you trying to cover all use cases here?"

Well, I think the persistent volume case — which I hope is clear by now — is the one that honestly makes the most sense to me. If I just install OpenShift on regular VMs or on bare metal, I have to go provision those PVs somewhere. Think about what I fundamentally have to do: log into some iSCSI or Fibre Channel server somewhere, actually provision those LUNs, get the URIs for those LUNs, create a PV object inside of Kubernetes, map it to that LUN, and make sure the nodes in the cluster can access that Fibre Channel or iSCSI target. I have to do all of that manually. So it just makes sense, because OpenStack is really good at doing that particular piece, and the demo shows that's pretty cool.

Looking forward, I can give you some examples of where I think it would be really cool, and the Heat templates show it. Scaling is another one: I want to scale up the nodes. OpenShift is getting hot, it's starting to say, hey, I'm down to my last 5% of resources, I need more nodes — and then it literally goes and touches a URL and tells OpenStack, hey, go fire up ten more nodes, provision me more nodes. Another interesting future use case would be what we call hypercontainers: maybe you have some kind of checkbox in Kubernetes that says, hey, I want a secure container that runs in a VM. Then it literally goes and talks to OpenStack, fires up a VM that has the kubelet — the OpenShift node service — in it, that node automatically registers itself with the master, OpenShift says, oh cool, my node's ready, and then it schedules the workload there. Those are the kinds of places where you see these dynamic interactions back and forth around the resources we need. A VM is basically a resource, if you think about it; storage is a resource; network is another one.

And we get some really weird use cases where people ask us, hey, we need containers that can only talk, literally, from this port to this port on that container — oh, and by the way, this one also has to talk to this one and this one — and they create this crazy graph. You can imagine having some kind of SDN handle that. You don't want to have two different SDNs.
You don't want an SDN layer on top of an SDN layer, but it would be really nice to go configure one SDN and say, hey, open this port to this port, from this container to this container — because if you think about it, that's from this process to this process on these nodes — and having OpenShift ask OpenStack to do that would be really nice. Firing up a new tenant is another perfect example. Say I've got a new dev team and I want to give them a playground where they can go kill themselves and not hurt anyone. Boom — you can imagine doing an oc new-project in OpenShift and Neutron automatically creating a tenant network for that project. That would be pretty cool. Those are the kinds of places where I'd want to see more integration, and I think it'll only get more, not less. Fundamentally, again, it's about exposing those resources: a VLAN is essentially a piece of physical resource that OpenStack is really good at carving up in software, so I think it would be really cool to see more interaction there.

Any other questions? I think we're at time, or even a little bit over. Oh, no, wait — we have extra time, like five more minutes. Any other questions? All right, if not, I'm done. Let's go drink some beer. Thank you.