Good morning. My name is Mark Shuttleworth; I lead product design and development at Canonical, and I'm the founder of the Ubuntu project. We'll be talking a little bit today about different types of containers, and specifically about the integration of native containers on bare metal with OpenStack. My colleague James Page...

I'm James Page. I'm a technical architect in our OpenStack engineering team, and I've been working with OpenStack on Ubuntu for probably as long as it has existed, so hopefully I can talk thoroughly about most things.

All right. So, what is a container? This isn't a container. This is a traditional physical Linux or Unix machine, and it has a couple of distinguishing characteristics. First, it has an IP address: the red dot there, that's an IP address. Second, it has its own disk. Any process running there talks to the disk, and all the processes running there feel like they're talking to the same disk. And then it has a set of operating system processes. When you install a fresh CentOS, or a fresh Ubuntu, and you turn it on and run ps ax, you'll see some stuff, and that stuff is providing services to the applications you're going to install. So for example, syslog is running there; apps can just send log information to syslog, and it will do whatever it needs to do with it. Init is running. Cron is running, and will trigger batch jobs at particular times. If you just do ps ax on a fresh Linux system, you'll see a whole set of processes, and that's really the operating system.

So we consider a machine to be a construct that is addressable (it has an IP address), that has its own sort of disk space, and that has all of these operating system processes providing operating system services, effectively. And I can install applications; I know how to install applications. I know how to keep that whole thing fresh: I can patch-manage it, administer it, update it. We've been doing this for 30 years; this is all standard stuff. So the red process there is the app, and system administrators know how to install that app, how to operate it, and how to keep that whole thing secure.

Then some clever people came along and said: let's slice up machines, so we can get more machines without buying more hardware, and let's do that with virtual machines. So what do we mean by a virtual machine? Again, it's a construct that looks and feels and operates just like a machine. Which means it has an IP address: there it is. It has sshd running, if it's a Linux machine. It has cron and syslog and init and all of those background processes you would expect. And the beauty of this, of course, was that nobody had to change any of their practices, procedures, or code. We could use virtual machines just like physical machines. That's what made them powerful; that's what made them so useful. We could get them quickly, we could get them on demand, and we didn't have to change the app, and we didn't have to change the operator, to use virtual machines.
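As a concrete illustration, this is the kind of process listing you'd see on any of these machines, physical or virtual (a sketch only; output abridged, and the exact services vary by distribution and release):

```bash
# On a freshly installed machine or VM, the full operating-system
# process tree is visible. Output abridged and illustrative.
$ ps ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:01 /sbin/init
  310 ?        Ss     0:00 /usr/sbin/cron -f
  335 ?        Ss     0:00 /usr/sbin/rsyslogd -n
  402 ?        Ss     0:00 /usr/sbin/sshd -D
```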
Then along came Sun, and Sun said: you know, that's great, but there's quite a lot of overhead associated with that virtual machine. To make that virtual machine, we're actually emulating hardware, which is very inefficient. What if we essentially just created a construct which felt exactly like that, but didn't have the hardware emulation layer? That was Solaris Zones, and they were designed to essentially create a new Solaris box instantly, because what you're really doing is telling a lie to those processes: that they are their own machine, when in fact they're sitting on the same kernel, on the same hardware, as everybody else. So that was Solaris Zones. IBM created a team at the LTC in Portland to bring that idea to Linux, and when IBM shut down that project, Canonical took on the responsibility of leading it. That is now the Linux Containers project, LXC and LXD, and the whole point of that kind of container is to give you a full machine experience, just like a virtual machine, but in a container.

So this is a kind of container that reproduces everything we know and are used to about operating machines. All of the processes you would expect when you boot up a machine are in that container: you've got syslog, you've got cron, you've got init, you've got your journals. All of the background processes that make up the operating system are there. That gives us the very nice property that we can operate software, install applications, upgrade applications, and keep them secure in that machine container exactly the same way as we do on a VM. So my existing applications, unmodified, with my existing operator practices, good or bad, can now be containerized in this new kind of guest. That feels very familiar; it feels operationally comfortable. But it does bring quite a lot of the past with us: those applications are typically unmodified, traditional Linux applications.

Then along came Docker, and Docker said: hey, if we're running lots and lots and lots of these applications, and if they are essentially stateless, then we don't need all of those operating system processes in every container. We just need mysqld. We don't need syslog, because we're not sending logs locally; we're sending logs over the network. So the Docker innovation, which we call process containers, is essentially to shrink the envelope of the container down to just a single application process. You'll see this when you use Docker: you specify a filesystem, and then you specify a single command to run. And if you go into a Docker container and ask what processes are here, you will not see syslog. You'll not see sshd. You'll not see cron. You won't see the background processes of an operating system. You'll see the files, but you won't see the processes.

That has lots of interesting properties; it's a very useful new way of thinking about software. But it does require that we operate differently, because if you just put an app there that's expecting syslog, it's not going to find syslog. It requires that we think anew about how to operate that class of software, and that's why you see this fantastic explosion of innovation around Mesos, around Docker Datacenter, around Kubernetes. These are essentially operating frameworks to replace all of the functions that those operating system processes used to provide for the application process. Does that make sense?
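To make that difference concrete, here's a hedged sketch (the container name and image are just examples; any Docker host will do):

```bash
# A process container holds just the application: no init, no syslog,
# no sshd. The name "web" and the nginx image are illustrative.
$ docker run -d --name web nginx
$ docker top web          # only the nginx master and worker processes
$ docker exec web ps ax   # the same view from inside, if the image ships ps
```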
Okay. So machine containers (LXD) and Docker containers (process containers) sit right next to each other, and in fact you can run Docker inside machine containers. Docker can sit on top of LXD quite comfortably: I can go into a machine container and run Docker there, and then I get a container within a container, a process container inside a machine container. We'll look at that in a little bit.

So, OpenStack. What is OpenStack? Simplistically, OpenStack is a spreadsheet for keeping track of guests. It essentially lets me get VM sprawl under control. It lets me harness, and keep track of, which VMs (or which guests, more appropriately) belong to whom, what machine they're on, and what IP addresses, disks, and projects are associated with them. So OpenStack very comfortably sits underneath both KVM, as you know it today, where you have virtual machines, and LXD, where you have container machines, because LXD container machines are designed to feel just like KVM guests. The entire experience fits very naturally. On the one side I've got the machine containers, LXD: I get bare-metal performance and low latency, all the things people love about containers. On the other side I get the isolation and the ability to have a different guest OS: I can put Windows in the virtual machines, but I can only put Linux in the container machines, effectively. But all of that can be managed by OpenStack.

And it's very important, when you're using abstraction layers, that you're abstracting the right thing. OpenStack makes a lot of assumptions about guests: that you can SSH to them, for example, and that they have consoles. Those are all true of LXD, but they're not true of Docker. I can't SSH to a Docker instance, because SSH isn't there. What is there is mysqld, or the database process, or Mongo, or whatever I put in the Docker container. So I need a different operating framework for all those process containers, whether it's Docker, or CoreOS's rkt, or OCID; it doesn't matter. Kubernetes is the one we'll touch on this morning, but there is a range of them, and it's an area of great innovation at the moment.

Okay, so that's a picture of a bit of the history, and also of the semantic differences between these kinds of containers. Unfortunately, if we keep talking just about "containers", we keep confusing ourselves. It's really good to talk about machine containers and process containers, because then it's clear what has been containerized, and also how you would expect to operate it and use it.

Okay, this is a slightly different version of the picture: bare metal at the bottom, with a hypervisor (it could be VMware, could be Hyper-V, could be KVM), and guests on top of that. Remember, those guests are machines. They feel just like machines, they have all the processes of machines, and I administer them just like machines. So LXD fits right next to that set, because it too gives you guests. They're containers, with bare-metal latency and effectively no virtualization overhead, but they are guests in every single sense.
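Since we'll demo Docker-inside-LXD in a little bit, here is a minimal sketch of that nesting (the container name mc1 is hypothetical; security.nesting is the LXD configuration key that permits nesting):

```bash
# A process container running inside a machine container.
# "mc1" and "web" are illustrative names.
$ lxc launch ubuntu:16.04 mc1 -c security.nesting=true
$ lxc exec mc1 -- apt-get update
$ lxc exec mc1 -- apt-get install -y docker.io
$ lxc exec mc1 -- docker run -d --name web nginx
```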
We run Docker or rkt or OCID on top of the guests. You can also do that on top of bare metal, but we typically run them on top of the guests, and that gives us all of these processes. Each process has an IP address, and it can have some disks associated with it, but fundamentally, think of it as a process with an IP address. And we can run Docker, rkt, or OCID on all of those guests; we can run them inside LXD and on top of KVM, et cetera. So here's OpenStack managing guests, and here's Mesos or Docker Datacenter or Kubernetes managing processes: processes with IP addresses at the top, VMs (really, machine constructs) at the middle level, and bare metal at the bottom. So this was a long way of providing context for the different kinds of containers, and for why you might use OpenStack in the one case, where it feels comfortable, and why you wouldn't in the other cases.

Now, our view is that this is exactly how people should think about operating Docker containers: it is a different layer. There are some folks who have a different view. OpenStack is a big community, a broad church, a big tent, and some folks say: no, we need a custom set of APIs in OpenStack just to deal with those processes at the top. And so we have projects like Kuryr and Magnum and all sorts of things that, at the OpenStack layer, are trying to bring those processes into OpenStack. Bluntly, I think that's crazy. Bluntly, I think those APIs will never be adopted, because they only exist in an OpenStack universe. Our view is that what you want is the best-of-breed process container layers, which are Docker Datacenter, Kubernetes, Mesos, and others: the things you can use on the public cloud and on bare metal. And you want those to be available to you on OpenStack, and in fact they are; there's no problem. You can run all of those things on top of OpenStack. It's a different endpoint, it's a different API, but that actually works, because it's typically project by project, with a different set of credentials and users. So this is really how we think it's all going to work, functionally: where we want containers that look like machines, those will be integrated at the OpenStack layer; where we want containers that look like processes, those will essentially be an application-level construct, a level above. So that's how we'll talk about it today. James?

So let's take a bit of time and dig into LXD itself. LXD has been around for probably two years or so, in one form or another, and what it does is provide a network-addressable RESTful API onto the underlying Linux container infrastructure on a single server.
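To make that concrete, a hedged sketch of talking to that API directly (endpoint paths per the LXD 2.x REST API; the unix socket path shown is the Ubuntu 16.04 default):

```bash
# LXD's API is plain REST: local over a unix socket, and, once enabled,
# remote over HTTPS on port 8443.
$ curl -s --unix-socket /var/lib/lxd/unix.socket lxd/1.0/containers
$ lxc config set core.https_address "[::]:8443"   # expose the same API on the network
```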
Like I said, it's network addressable, and it's a nice, simple API, built from the ground up to be easy to consume, easy to integrate, and very, very easy to use. If you install Ubuntu 16.04, you get LXD installed by default, and the LXD commands talk directly to that API to spin up containers, manage images, manage the underlying network on the individual host, and manage the storage. All those semantics that a hypervisor brings are represented in that LXD API, and are designed to be super easy to consume.

LXD is designed to be really, really fast. The time to spin up a container is seconds: the time from saying "lxc launch" to getting something you can log into is a matter of seconds. And it's designed to be super secure as well. The processes in an LXD-managed container are not running as root on the host. Once you've logged into the container you have the normal sudo commands, you have root within the container, but all those processes are wrapped up in an unprivileged user on the host. So, God forbid there were a breakout from the container, all you've got is an unprivileged user on the host. We've put a lot of security wrapping around the container itself to make sure that it is very, very secure.

And we've got some very hypervisor-type features. You can snapshot containers: snapshot a container, do some work, roll it back, that sort of thing. You can migrate containers between hosts, and you can do that online, so you can do a live migration from host A to host B entirely using the LXD API, to push containers around your infrastructure, to do maintenance, to distribute load. All of these things give a very hypervisor-type experience; LXD is a hypervisor, at the end of the day.

So the key idea is that where you use KVM today, you can optionally use LXD right next to it, on the same machine if you want, and create guests that are actually bare-metal containers, with all the same snapshotting and live-migration primitives that you would expect from KVM. That allows us to genuinely lift and shift legacy applications from VMs to containers, without changing the code and without changing the operational practices.

Two stories for you. A typical bank CIO said to me: I've got 8,000 Linux applications, and 10% of them will get touched in the next 10 years. So 7,200 Linux applications running in Linux VMs today are trapped. They can't become Docker and Kubernetes applications, because I'd have to touch them to make them that, but I want to get them into containers. Well, of the 8,000, those 7,200 can come straight into LXD today; typically more than 90% of applications make that transition instantly. And Box, a Silicon Valley company providing storage, was using a lot of Scientific Linux, and they did exactly this: they moved almost their entire portfolio of workloads into LXD, successfully, in a week.
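Those hypervisor-style primitives look like this from the command line (a sketch; the container and snapshot names are illustrative):

```bash
# Seconds from launch to login, plus snapshot and rollback, all backed
# by the same REST API. Names are illustrative.
$ lxc launch ubuntu:16.04 c1
$ lxc exec c1 -- bash              # a full machine experience inside
$ lxc snapshot c1 before-upgrade   # snapshot, do some work...
$ lxc restore c1 before-upgrade    # ...and roll it back
```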
So what do you get when you make that transition? Obviously you get much better performance, much better density, and much lower latency; it's essentially zero latency. You literally just lift and shift your applications; you don't have to change the operating system. It'll be Ubuntu, typically, underneath, but it can be CentOS inside, or Scientific Linux inside, or Debian. Another project we did, with Intel, was a supercomputing project. They took 1990s Linux code that we couldn't see, from a supercomputer, and ran it unmodified inside LXD on Ubuntu, on a 2015-era supercomputer. They literally just copied the disks from the old supercomputer into LXD containers and ran those on the new supercomputer, and everything worked, completely unmodified, no problem at all. So it's really sophisticated, but it makes things really operationally simple.

Okay, so let's talk about the integration of LXD, as a hypervisor, with OpenStack. Let's do a quick demo. Did everybody catch the interop session in the keynote yesterday morning? Yeah, that wasn't a race, honestly; it was all about interoperability of 17 different clouds. So we're going to look at that same workload. Can we switch the video over to the second laptop, please?

Okay, so we've got the same LAMP stack deployed on this Orange Box here. To start off with: this is a 10-node cloud in a box, and we've got OpenStack Newton deployed on it, on Ubuntu 16.04. It is running, let me check, seven... no, I can't add up: eight hypervisors, and they are a mix of KVM and LXD. So you can run both hypervisor technologies side by side in the same cloud, and you can consume both containers and KVM under the same API, just by selecting which hypervisor type you actually want to deploy your workload on. We've got five LXD hypervisors and three KVM in that particular box; I'll talk about why it's not an even split in a minute.

We've got the LAMP stack deployed, so HAProxy, MySQL, and WordPress, and you can see here we've got four units in all; WordPress has two units of that, for a bit of scalability. This is deployed on the KVM part of the cloud. So we'll have a quick look at what an instance looks like, and this will be pretty familiar to everybody. We can see processes running, we can see memory being consumed: a standard KVM guest with MySQL running in it. You can see there are a couple of cores allocated to the instance, so OpenStack is applying constraints, via libvirt and KVM, to the underlying instance, and it only gets to consume those resources on the host.

I've taken that same model and deployed exactly the same thing on the LXD hypervisors in the cloud. I've used exactly the same tooling, which is Juju here, and I've taken the same model and put it down on both parts of the cloud in exactly the same way. All we changed was the machine type that we asked for from OpenStack. So it's just a different instance type, effectively, from the same OpenStack cloud.
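One common way to express that choice is to tag images so nova's scheduler places workloads on the matching hypervisor; a sketch with illustrative names (the exact property values depend on how your nova-lxd and KVM hosts register, so check your own deployment):

```bash
# Same OpenStack API, two hypervisor types. The ImagePropertiesFilter
# can steer scheduling by image metadata; names and values illustrative.
$ openstack image set --property hypervisor_type=qemu xenial-kvm
$ openstack image set --property hypervisor_type=lxd  xenial-lxd
$ openstack server create --flavor m1.small --image xenial-kvm web-kvm
$ openstack server create --flavor m1.small --image xenial-lxd web-lxd
```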
So I can SSH to the machine. I can look at the process listing, which on a Linux container is much, much smaller, because you don't see all of the kernel processes in addition to the user-space processes. We can see init, rsyslog, sshd, all the normal things you would expect on a machine, but without the kernel's process listing as well. So, does that look familiar? That is just a Linux guest. But it's a container. So everything you can imagine doing on a Linux guest that doesn't interact closely with the kernel (in other words, that isn't loading kernel modules or anything like that) will just work in this container. And the applications in the container can see how much CPU they've got; they can see it's a two-core machine. And we can see that it's got exactly the same one-gig memory allocation as the KVM instance had. So lift and shift entirely works. You can move the majority of your KVM workloads directly to LXD, using exactly the same tooling as you're using against KVM today; you can do exactly the same thing with LXD.

Okay. Yep, we can switch back now. Can we switch back to the other video, please? Thank you. So that is that OpenStack, deployed and modeled on that machine with Juju; many of you will be familiar with using Juju to operate OpenStack. And you can see that there are two hypervisors deployed, effectively: nova-compute and nova-lxd, which are these two boxes over here. This is the underlying view; that is the physical box in MAAS. Those are the physical nodes, and you can see they're just running the standard Ubuntu operating system; there's nothing fancy over there. And this is the Horizon, I believe, for that OpenStack, so we can go in and have a look. Here are those instances, and you'll see they're essentially all just instances inside the OpenStack; they show up in Horizon. Half of them are LXD instances, LXD guests, containers running at bare-metal speed, and half of them are virtualized guests, KVM guests, running at virtualized speeds.

And we should launch some instances. Can we just switch to the admin tab and look at the hypervisor overview? The hypervisors one, there we go. So this view gives us the view of the hypervisors running in this cloud, and you can see that three of them are registering as QEMU and the other five as LXD. So we can see, via the web UI for OpenStack, the different hypervisor types that we've got configured in this cloud, and you can see the current workload on each of those hypervisors as well, represented in exactly the same way between the two different hypervisor types that we're running. So this is now standard best practice for us: we commonly deploy OpenStack with a mix of KVM and LXD, and then expose different instance types, so that the users of that cloud can essentially choose when they're going to get a container and when they're going to get a virtual machine.
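To recap what we just saw inside the LXD guest, a sketch (the instance name carries over from the illustrative example above; lxcfs is what makes /proc inside the container reflect the instance's allocation rather than the host's):

```bash
# Inside a LXD guest: a normal, small Linux machine whose visible
# resources match the flavor, courtesy of lxcfs. Output illustrative.
ubuntu@web-lxd:~$ ps ax    # init, cron, sshd, mysqld... no kernel threads
ubuntu@web-lxd:~$ nproc    # the two cores the flavor allocated
2
ubuntu@web-lxd:~$ free -m  # the one gigabyte the flavor allocated
```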
Okay. Before we move on to Kubernetes, I think we want to talk a little bit about how that feels operationally. So, operationally, it feels exactly like using OpenStack, surprisingly. All the things you do on your KVM cloud today, you can do with LXD as well: boot, reboot, stop, delete, resize, rescue, add floating IPs. LXD integrates into the same underlying SDN technologies as a KVM instance does, so you can use VXLAN, GRE, whatever you want in terms of your overlay networking, or map things directly into the underlying provider networks in your data center. All of that just works. So from a tooling perspective, if you're using the OpenStack API to manage KVM now, you can use the OpenStack API to manage LXD now as well.

When I talk to people about what they want from containers, there's a very wide spectrum of opinions, but I'd say the bulk-majority first step is simply this: I want to be able to get better performance and density out of my OpenStack. I want to be able to run the same stuff, the same way, just with better performance and density. There are additional benefits to taking the next step, to process containers: operationally, you get new primitives with Kubernetes or with Docker Datacenter or with Mesos. But you have to think about them, and you want to think about where you're going to get the benefit of that fastest, and do it there, either in your next application or by going back and touching some of your existing applications.

And a lot of the magic of LXD, a lot of the work that we do at Canonical on LXD, is providing operational guard rails, effectively, around these containers. Because those processes are running on bare metal, plausibly they could consume all the resources of the machine. So things like precise allocation of quotas (number of cores, amount of RAM, amount of disk IOPS, and so on) have been a key focus for the LXD team. They're really building a hypervisor, and hypervisors essentially provide bounds on resource utilization. But the great story here is that we have all the mechanisms of the Linux kernel to build on. We are allocating real CPU time, and so we can bound it and apply QoS to it in exactly the same way, with the benefit of decades of kernel QoS capabilities. Anything you can do to a single process under Linux, anywhere in the world, in terms of quality of service, in terms of allocating a number of milliseconds of time out of one second to a particular process, or the longest delay in real-time response before a process gets time, or the total amount of RAM or IOPS or CPU time that can go to a process: all of that can be applied cleanly and precisely to LXD guests. So, for example, people doing transcoding, real-time applications, or high-performance computing really now have an incredibly precise hypervisor, a hypervisor that has properties unlike anything ESX or KVM can deliver. And that means that LXD is really taking off in places where people really care about either control (I want to control the amount of latency or jitter) or performance (I just want all the bare-metal performance).
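A sketch of how those kernel QoS mechanisms surface as LXD configuration (key names per the LXD 2.x documentation; availability varies by release, and the container name and values are illustrative):

```bash
# cgroup-backed QoS exposed as simple config keys.
$ lxc config set c1 limits.cpu 2                      # pin to two cores
$ lxc config set c1 limits.cpu.allowance 25ms/100ms   # hard CPU-time slice
$ lxc config set c1 limits.memory 1GB                 # RAM cap
$ lxc config device set c1 root limits.read 30MB      # disk I/O throttle
```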
And the ultimate expression of that is essentially people saying: hey, this is a much better way to allocate a physical machine to somebody through OpenStack than Ironic. This is a way for me to essentially give somebody all of the compute of a machine, or half the compute of a machine, but typically all the compute of a machine, without actually giving them the ability to flash the BIOS on the machine, and without losing the hypervisor primitives, such as the ability to attach storage and attach network interfaces to those instances. So here we've got live migration of LXD. But the key story is the idea, which is new in Newton as a contribution from us, of being able to tell the scheduler that you only want to have a single guest, that a guest is going to be the only thing on a machine. Why is that interesting? If I give somebody an instance type in my OpenStack cloud which is an LXD instance, and that is the only thing running on the machine, then I have effectively given them 100% of the CPU with no virtualization overhead. But I can still live-migrate that to another machine. So there are a lot of hosting companies, or big-data specialists, or analytics or data science or machine learning type companies, who find that this is the best way for them to essentially sell full physical machines by the hour, or by the minute, while still preserving the ability to attach storage dynamically, attach these things to software-defined networks, and live-migrate them in cases where they have to do physical maintenance on the machines.

And then, of course, the other benefit is that if those are untrusted users, that container cannot flash the BIOS on the machine. It gets all the CPU, it gets all the RAM, it gets all the network, it gets all the disk capability, but it doesn't get access to the parts of the kernel that would let it flash the BIOS. From an operational perspective, that's a really powerful story. If you're doing this on bare metal, then as a machine comes back from a cloud user you have to cleanse it: you have to reflash the BIOS, you have to scrub out all the things that user may have done, and you have to deal with failures during provisioning, that sort of stuff, as well. With this, you put the server up, you put nova-lxd on it, and it is permanently available as a fixture in your cloud, for a tenant to consume as a complete compute resource.
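Those live-migration primitives, sketched (CRIU support is required on both hosts; host and instance names are illustrative, and the nova path assumes your cloud has live migration enabled):

```bash
# The same live-migration primitive at the LXD level and via nova.
$ lxc move c1 host2:                       # LXD-to-LXD live migration
$ nova live-migration web-lxd compute-7    # the same idea through OpenStack
```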
Okay. So, benchmarking. Let's have a look at how bare metal, nova-lxd, and nova-KVM stack up. I've borrowed this data from one of my colleagues' presentations on Tuesday, about big data and machine learning. This benchmark is driven by Spark, and it does anomaly detection of credit card fraud: modeling and anomaly analysis to detect when, say, someone goes to Spain from the US and buys a laptop, or whatever it might be. So we looked at the time taken to complete both the modeling and the analysis of a data set, and the difference between bare metal and nova-lxd running on an exclusive machine (so, giving it the complete power of a single machine) was about 10%. That difference, we think, is probably to do with networking: the nova-lxd instance was connected to a virtual tenant network that was not using jumbo frames, and it's quite a data-intensive process, so we think that with further tuning we can squeeze that even further. With KVM, the difference was considerable: it took almost twice as long on KVM as it did on bare metal. It's doing a lot of IO, it's doing a lot of disk, it's doing a lot of networking, and it just took longer to do on KVM because of the overheads.

Same story again with TeraSort, the recognized benchmark for your Hadoop deployment: about a 10% difference between bare metal and LXD (actually, I think this one is sub-2%), and a little bit further away with KVM; we actually saw a number of errors with parts of the TeraSort under KVM. There's less networking here, so you're moving less traffic over the network with TeraSort; it's really focused on compute and disk access as your bottlenecks, and there's effectively no overhead there from LXD.

If we look at a different dimension, we're looking at latency here, using Cassandra, which is lots and lots of very small writes from lots and lots of clients. On LXD we saw a very low latency figure, 30 milliseconds, whereas the same stack running on KVM was near 110 milliseconds average write latency. And that translates directly into throughput on your Cassandra cluster at the end of the day as well. So this is also super relevant for people doing time-sensitive transcoding, or time-sensitive applications, or high-performance computing, where you need all of the nodes to complete in a predictable amount of time so that you can move on to the next stage of your calculation. And that's the Cassandra throughput figure for exactly the same test; bigger bars there are better.

Okay. So OpenStack Newton has full integration of Nova with LXD; it's worth upgrading to Newton for exactly that feature. It's an amazing capability, and it really changes the relationship that you have with your users. They notice it immediately: it's an incredible shift in performance, and in the responsiveness of the cloud to them. I don't think we launched any instances, but they launch really, really fast; if we have time, we'll come back to that.

Okay, so that's machine containers: they look, feel, and operate just like machines. If your app is not fiddling with the kernel, it will go straight into LXD and never know the difference. For telcos, networking apps often do work in the kernel, but everything else typically doesn't really care what kernel it's on and doesn't have specific interactions with the kernel; it just needs a Linux kernel.

So, Kubernetes. Kubernetes is one of the three major ways to coordinate those process containers sitting at the top, and we want to talk just briefly about that. I will be doing an in-depth look at Kubernetes, and at operating Kubernetes across public and private infrastructure, virtualized and bare metal, later on today. But just very briefly: again, our view is that process containers are a completely different thing. They don't really belong in the infrastructure-as-a-service layer, which is all virtual machines, virtual disks, virtual networks. They're essentially a layer on top, and you want to give work groups, teams, and projects the ability to get Kubernetes on demand for a project, to get the version of Kubernetes that they want, on demand, for a project.
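A sketch of what that on-demand Kubernetes looks like with Juju (the bundle name is as published in the charm store around the time of this talk; the target cloud is whatever your controller is bootstrapped against):

```bash
# Upstream Kubernetes modeled with Juju on any machine substrate.
$ juju bootstrap                     # against OpenStack, AWS, MAAS, VMware...
$ juju deploy canonical-kubernetes
$ juju status                        # easyrsa, etcd, masters, workers come up
```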
And so this is upstream Kubernetes, again modeled and operated with Juju. Those Juju charms are upstream in the Kubernetes repository on GitHub, and this model can be built trivially on any kind of machine-oriented substrate you might have: VMware, bare metal, OpenStack, all the various public clouds, MAAS, which is the physical cloud, effectively. So that's showing that I can build Kubernetes on bare metal. And I think we have... let's go have a look.

So on top of those clouds we had a bunch of instances, and that is Kubernetes, modeled. I can show the instances here: these are now KVM instances on that OpenStack cloud, and on those KVM instances we have built a model of Kubernetes. So here's the service model, the service view, effectively: the different applications. And this is the machine view, looking from the bottom up and saying which virtual machines I have, that Kubernetes is installed on those, and which services, essentially, are in which containers. EasyRSA is a key management service, effectively; it'll distribute keys between those various things. This deployment of Kubernetes is using three virtual machines providing etcd. There's Logstash and Kibana there, for monitoring; I'll show you that in a second. We've got a load-balancing agent there, and then the Kubernetes workers and masters.

If I go and have a look, this is the Kubernetes dashboard. Here you see, essentially, the workloads that have been deployed. These are process containers, and clusters of process containers, that have been deployed onto that Kubernetes. So, from the bottom up: MAAS modeling physical machines; Juju and the OpenStack charms modeling an OpenStack cloud on bare-metal machines, with LXD and KVM as hypervisors; then, again, Juju on top of the KVM guests modeling Kubernetes; and now Kubernetes modeling the process containers, NGINX and various other services.
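Driving workloads on that cluster is plain upstream kubectl; a sketch (deployment name, image, and replica count are illustrative, with flags per kubectl of this era):

```bash
# Process containers scheduled onto the cluster: this is what shows up
# in the dashboard view.
$ kubectl run nginx --image=nginx --replicas=2
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods
```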
If I wanted to do management and monitoring on that, this is Kibana, with Topbeat and Filebeat: an amazing monitoring stack. But I could trivially integrate Nagios or Munin or any other monitoring framework into that Kubernetes; we just like Kibana. So here you can actually see what's going on inside those VMs, in Kibana.

Okay, just to show you what it feels like to operate Kubernetes here: say I want to have Nagios. I can go and get Nagios and deploy that, and then I need an agent, and then I can just connect that agent to my Kubernetes machines. Ah, and for the monitor-top one I'll need a different agent, because I've got two different series of Ubuntu there. But deploying that essentially brings Nagios into the model, and in time I'll be able to see the same Kubernetes hosts through Grafana and Kibana and through Nagios.

So that was a model built on OpenStack. Actually, that was on Amazon! This is Kubernetes, exactly the same model of Kubernetes, but built on Amazon; you can see the IP address up there. This is Kubernetes on Amazon, and it looks exactly the same. This is the same dashboard view, with the same set of Docker processes, effectively, modeled in Kubernetes, but on Amazon. And this would be Topbeat and Kibana looking at the Amazon VMs providing that Kubernetes service. So you see how we've got exactly the same experience for Kubernetes on OpenStack, on KVM, and on AWS. And we think that is a really powerful way to get access to the best-of-breed stuff from the public cloud for process containers (Mesos, Docker, Kubernetes) in the OpenStack world, rather than OpenStack-specific APIs, which are only usable inside an OpenStack context, which are only useful in an OpenStack context, and which require operational tooling that's different for your on-premise OpenStack environment versus your bare-metal environment, your VMware environment, and your public cloud operations tools. And now I'm going to go and finish that Nagios integration.

James, I think we have time for... do we have time? We're out of time. But if you have questions, we'll be happy to take them after this, absolutely. Thank you very much. Thank you.