Good afternoon, everybody. We're going to get everybody settled here and get this right on the road. We're crunched, or a little crunched, for time, so we're just going to get rolling. I want to welcome everybody to Cisco's fifth and final sponsored session of the day. We had four great sessions earlier today, and I'm glad you were able to turn out for our post-lunch session. This is the big deal: our premiere session, with Lou Tucker and Steven Dake, both from Cisco, and Michael Schmidt from SAP. They're going to dive deep into the work that Cisco and SAP have been doing in the container space. So with that, Lou, it's all yours.

Thank you. Thank you very much, and it's great to have you here. In fact, many of you could probably give this presentation, because I think the work being done in OpenStack with containers and Kubernetes is extremely exciting. We started this work almost a year and a half, two years ago, when we were fundamentally trying to answer the question: can there be a real advantage if we start looking at these technologies together? That's why I wanted the title of this talk to be about the operational advantage you get when you look at these two major technologies and draw the best from each into the same integrated environment. Those of you who deal with customers, or with your own deployments, will recognize that it would be really nice if we could ultimately make this a single, unified environment. We want the developer to have a choice: a container, if they prefer that packaging mechanism, or a virtual machine, if they need the isolation and multi-tenancy. We want these things to come together.

In many ways, people have pitched this as a battle between the two technologies. I've heard people say that containers are going to take over OpenStack, or that OpenStack will just have containers on top. So what is the right model? These are two different technologies, and it's analogous to programming languages: most major applications use more than a single language. I'm also vice chairman of the OpenStack Foundation, and one of the questions we're always trying to address is Python: does everything have to be in Python, or do we find a way to start embracing a larger number of technologies? We have made the decision that we really do want to embrace containers. So I don't think this is a battle at all; I think it's something that will let us draw on the advantages of both.

To level-set everybody, for those of you who are not as familiar with this area: containers, in my view, are simply an excellent way to package up an application and all of its dependencies.
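To make that packaging idea concrete, here is a minimal sketch with a hypothetical toy app (not from the talk) showing how an application travels with its own runtime and dependencies:

    # Package a toy app and its dependencies into an image (hypothetical example).
    cat > app.py <<'EOF'
    print("hello from inside a container")
    EOF

    cat > Dockerfile <<'EOF'
    FROM python:3.6-slim
    COPY app.py /app.py
    CMD ["python", "/app.py"]
    EOF

    docker build -t myapp .   # the image now carries its own Python runtime
    docker run --rm myapp     # runs the same way on any server with a Docker daemon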
You can run a containerized application on any server, independent of which particular packages, or which Python, happen to be on that server; you bring those things with you. A container is just like a shipping container: you bring everything you need, you can run it on any server, and at the same time you're leveraging everything underneath you. This means very, very low overhead. You can spin up hundreds or thousands of containers on a single server, and you can bring them up very quickly. It also fits much better with the model of microservices and disposable services: if you want to upgrade, you just bring down one container and quickly replace it with another, and you can do that en masse. Whenever you've seen a demonstration of containers, I think you're always impressed; they say, okay, let me bring up 400 containers here, and zip, zip, zip, they're all brought up. That speed also means we can do things such as upgrades much more readily. At the same time, containers are coming out of a lot of the work we've seen in Docker, and then container orchestration, in things like Mesosphere and in Kubernetes. The entire tool chain around containers is being invested in very broadly. So the fundamental question is: why don't we, in the OpenStack world, start taking advantage of this technology? If you think of the set of services that make up OpenStack (Nova, Neutron, Cinder, and everything else), those are services; they can be made into containers, and we can get all the benefits of that.

The two big projects in this area, again just to level-set everybody, are Magnum and Kolla. Magnum allows you to run containers on top of a cluster of virtual machines. It makes it very easy to go from running containers on Amazon or Google to running them on OpenStack: you bring those containers over and use Magnum to create environments such as Docker Swarm and Kubernetes. Kolla looks at it the other way around. Kolla says that containers are at the base level, and we take the OpenStack services and turn them into containers, so that they can run on top of an orchestration system: either directly, using something like Ansible, or on a real orchestration system such as Kubernetes. So those are the two different models. And on top of that, you could still run containers on top of VMs if you want the isolation, so you get the best of both worlds.

So I wanted to make this statement: the OpenStack platform becomes an application, built and deployed as a set of containers. In our software-defined data center, we have bare metal at the bottom, and we're increasingly looking at tools for abstracting bare metal and orchestrating bringing it up and turning machines into clusters; that's a whole other body of work around metal-as-a-service. On top of that, we see containers being the next layer of infrastructure, and on top of that you can run OpenStack. In fact, you can run multiple instances of OpenStack. That allows you to mix and match, and you're giving developers the best of both worlds.
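To make the Kolla side of that concrete: each OpenStack service ships as an ordinary Docker image, so pulling and starting one looks roughly like this. Image names follow Kolla's namespace/base-type-service convention; the tag and the config mount here are illustrative:

    docker pull kolla/centos-binary-nova-api:3.0.0
    docker run -d --net=host \
      -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
      -v /etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro \
      kolla/centos-binary-nova-api:3.0.0
    # KOLLA_CONFIG_STRATEGY is Kolla's convention for how the container
    # copies its configuration into place at startup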
We were in fact motivated to do this at Cisco. The Cisco use case is NFVI. NFVI is an OpenStack platform optimized for running virtualized network functions, and we distribute it as software, to run as a solution in a customer's data center. One of the prime things we wanted to achieve was for the deployment to be the same in everybody's data center. We wanted to encapsulate both the configuration and the services in these containers, so we could put them everywhere and have them be the same, and so we could rapidly update and upgrade them simply by bringing in new containers, with a distribution model based on a Docker registry. We can now distribute this OpenStack software out of a Docker registry, deploy it everywhere the same way, and then upgrade very readily. On top of that, NFVI supports applications such as network services, mobility, and media; some of those will be in containers and some in virtual machines, so we need this uniform environment.

This slide looks kind of complicated, but what I wanted to show quickly is that it's really about lifecycle management. With containers, we have a clearer model of how we manage the lifecycle of each of these containers independently, upgrading different components wherever we need to. At the center of this is OpenStack, but as you can see, around it are all the other elements you need in a modern environment: a CI/CD deployment system, all the automation for it, and then how we do upgrades and updates.

These, in fact, are the services that have been containerized. We've worked very closely on this; we're going to have Steve Dake up here talking about Kolla, the project to containerize OpenStack services. These are all the services many of you may be familiar with, and it's not just the top-level services but others you can see, such as Logstash and MariaDB, that we're putting out there as containers, so we can rapidly bring up this environment as a set of containers supporting OpenStack. So without further ado, I want to bring up the real experts, the people really doing the work behind this. First, Steve Dake, the PTL for Kolla.

There's my Twitter handle if you want to tweet at me. I'll go ahead and get started. Let's talk about the highlights for Kolla Newton. As you can see, our graph is growing in terms of adoption, and there are three different colors here. The dark green is deployment; that's at 1%. The middle green is people that are evaluating Kolla. This covers the user survey from a year ago, from six months ago, and from today: evaluation was at 0% when we started a year ago, at 3% six months ago, and the most recent user survey has it at 4%.
What's cool about this: if you notice, the last green column, the interest level in Kolla, hasn't changed, which means that interest is actually increasing. Well, this is how I interpret the data: the interest is increasing, and the people that were interested before are now actually testing and evaluating Kolla. If you add these numbers up, one plus four plus eleven, that's about sixteen percent of the operators that filled out the operator survey, and I think the data set was something like two hundred sixty-two people, or deployments. That's pretty significant. Again, 1% deployment is not a lot, but it shows that people are really interested in this model.

Let's talk about the highlights a little bit. Probably the biggest thing in this release, in my mind, is that we can deploy from bare metal using PXE: from PXE all the way to a deployed OpenStack, beginning to end, in a couple of operations. You have to run a few different commands: one of them is a Kolla bootstrap, another is the Kolla Ironic deploy, and then a Kolla deploy. So there are three operations that result in the deployment of OpenStack, from bare metal all the way up to a running OpenStack. It's pretty cool.

We've had about 20 months of development on Kolla, and for the first five or six months we floundered. We worked on Kubernetes; I submitted patches upstream to Kubernetes, and they said, well, we're not really ready for those patches yet, we're trying to release 1.0. So I said, okay, we're just not going to use Kubernetes; we'll do something else. We tried Compose instead. The idea behind Compose was to do an all-in-one deployment, but once we got that compute kit working with Compose, we decided it wasn't really going to work for us either, because it's a single node, and who wants a single-node cluster except for developers who develop OpenStack? So we decided to go to Ansible, and once we got to that point, we were pretty set. Within a year we got to the point where our adoption and our interest growth were going through the roof, in my opinion.

One thing we did this cycle, toward the end of milestone three: and I want to send a shout-out to the OSIC here, which provides 130 nodes for us to test Kolla. A fellow from Intel named Manjeet (I'm not sure whether that's his IRC nickname or his real name) had tested Kolla on 64 nodes and found three or four bugs, and we fixed those bugs. The first time we deployed Kolla on the OSIC cluster, with roughly 130 nodes (100 of those were compute nodes, 20 were storage, and three were controllers), it worked right out of the box. Well, after we'd fixed those four bugs, probably three months prior. That was really cool. It was very rewarding to see that OpenStack could work in a container environment, and for me, it felt like my mission on that part of Kolla was complete. So Kolla does full deploy, upgrade, and reconfigure.
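For reference, that bare-metal-to-cloud flow maps roughly onto the kolla-ansible CLI of the Newton era. The command names below are approximate, and bifrost is the containerized, Ironic-based provisioner:

    kolla-ansible deploy-bifrost                  # stand up the containerized provisioner
    kolla-ansible deploy-servers                  # PXE-boot and provision the bare-metal hosts
    kolla-ansible -i multinode bootstrap-servers  # prepare the hosts (Docker, dependencies)
    kolla-ansible -i multinode deploy             # roll out the OpenStack containers

    # the same tool also drives day-2 operations:
    kolla-ansible -i multinode upgrade            # move the cloud to a new release
    kolla-ansible -i multinode reconfigure        # push changed configuration everywhere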
Reconfigure is the idea that you've got a configuration that exists in the system, and you want to change it without having to log into 120 nodes, or 130 nodes, or 500 nodes, or whatever node count you have. It will reconfigure your entire cloud from one single source of configuration data.

We also have a really high degree of security. I'm kind of a security nut; I think if it's not secure, you might as well not ship it. So we have TLS support. If you missed my talk on our OSIC scale testing: the performance impact of TLS was anywhere from 30% to 300%, depending on the load. The 300% was banging on Keystone with "give me a token"; the 30% was more like just creating Cinder volumes.

We have complete customization flexibility in Kolla: we can customize any value inside OpenStack. Any configuration value you want, you can customize. The real advantage is that you don't need a development cycle where you say, okay, now I've got to get this custom key added. Instead of adding the custom key to the code base, you just set it and deploy it, and you're not reliant on the community to validate whether we really want that in our code base. It's all up to you: you can read the upstream documentation, and Kolla will override those configuration settings for you.

Now I want to talk about the kolla-kubernetes project, which is a new project. We've been at it for about three months. It started six months ago, but it took two or three months to sort out what we wanted to do with it. What I see this project doing is providing a converged data center. A converged data center, in my mind (and Lou spoke a bit about this), is: you've got a bottom layer of Kubernetes, let's say a thousand nodes; then you've got OpenStack over here, which is like 500 nodes; and then you've got something else over there, which is all of your container system. This is the future of the converged data center, and it's really where kolla-kubernetes comes in.

Let me move on to the next slide, because that's the end of the material there. I want to talk briefly about the architecture. We've got some Ansible code; this is the kolla-ansible code base. There are about a thousand tasks, divided into about 50 roles, and if you know anything about Ansible, that's a boatload of work. We've got about 50 services; a role is a service, that's another way to think of it. We've got Docker containers, about 150 of them, spanning the entire gamut: things like Elasticsearch; something called Heka, which feeds Elasticsearch with data; MariaDB, of course; and then a whole bunch of big-tent services. We also have some non-big-tent services; we don't have a big-tent-only policy, so we take services that people submit, as long as they work. We were really big on focusing on container services first, so we've got Magnum and Kuryr, of course, integrated into our system. That's pretty cool.
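Circling back to the customization point: Kolla merges operator-supplied snippets over the service configs it generates, so overriding a single option looks roughly like this. The /etc/kolla/config path is Kolla's default override location; the option itself is just an example:

    mkdir -p /etc/kolla/config
    cat > /etc/kolla/config/nova.conf <<'EOF'
    [DEFAULT]
    cpu_allocation_ratio = 4.0
    EOF
    kolla-ansible -i multinode reconfigure  # merge the override and push it to every node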
And then finally we've got our tools. You probably can't see this all that well, but we have 12 CLI commands, and those 12 commands allow you to control your OpenStack system completely, from beginning to end. So instead of operating a cloud by staffing up 30 or 40 people that understand how OpenStack works, you work with these 12 CLI commands, and maybe you need three or four operational people, working in shifts, to operate an OpenStack cloud.

This slide is our affiliation slide. Kolla is diversely affiliated. This pie chart shows the corporate contributors, and the whole point is that we could cut off four of these corporate contributors and Kolla would still survive. Kolla isn't dependent on one single vendor for success; if you look at other deployment tools, they are. That's a huge advantage, in my opinion. As for the statistics on the right, I'm not going to explain what each of the metrics is, but I'll tell you what the numbers mean: if the teal is under 50%, you're diverse according to the Technical Committee, and if the blue is under 80%, you're diverse according to the Technical Committee. The green is how many contributors and reviewers we have (sorry, committers and reviewers). You can look at this in more detail on Stackalytics or on the analytics website.

Finally, the last thing I want to talk about is our repository split. A lot of people, not just in our community but in the wider OpenStack community, want the containers to be a separate thing. Now, you could just take the repo, delete all the Ansible code bits and whatnot, and you'd have your Docker containers. But we think the Docker containers offer a whole lot of value, and we want them to be usable and reusable by everybody; they have an API. So what we're doing between Newton and Ocata is splitting the Kolla repo itself into the Docker containers and the Ansible bits. And we've already got a kolla-kubernetes repo, which will continue on. That's the end of my slides. I want to introduce Michael Schmidt now; he's got a great demo about the converged data center. Michael?

All right. Hello everyone, my name is Michael Schmidt. I work for SAP, and I'm here to talk to you about SAP Converged Cloud. Before we start, a little bit of context. SAP is the biggest German software company. We have been selling business software for more than 40 years now, and if you do any business transaction nowadays, there's probably some piece of an SAP system involved in it. I'll spare you the details. With big companies come big challenges, and one of our problems is the fragmentation of our cloud landscape in-house. The last time we counted, we had 23 different cloud properties. And of course we're not crazy; that was not planned. It's all due to acquisitions, mergers, and the innovation cycle spinning faster than we can move. We also have the classical cloud and on-premise problem: our software traditionally runs in the customer's basement, but nowadays they want to have it in the cloud, so we need to host their software, or make our software hostable in the cloud. And our flagship product is SAP HANA.
HANA is an in-memory database, and it needs lots of memory; think terabytes. That brings us to the bare-metal use case we have. HANA does run in VMs, but the t-shirt sizes only go so far, and if you're really serious about it, you need to put it on real hardware. And of course, that needs to mix and match with VMs and containers. We also have operations stretched across all those cloud properties, with a lot of processes and experience there which we need to leverage in the new world as well. And of course, it's 2016: if we're building up new things, they need to be efficient.

The solution to these challenges is something we call the SAP Converged Cloud, and it's really a strategy. To sum it up in one sentence: we're going to rebase our company on OpenStack. There are a few different initiatives working on this strategy, and one of them is my team. Our mission is to build up new data centers and put OpenStack on them; we're talking about 18 DCs in 13 locations. We have mixed payload: KVM, VMware, and bare metal all need to mix and match. The OpenStack footprint we're looking for is the usual suspects, but also some not-so-usual components, like Manila, Designate, Barbican, and Monasca for monitoring. We also have a bunch of our own services, developed in-house; most notably the ones at the bottom of the slide. We have an automation service we call Arc: think configuration management made easy for VMs; you can run Chef and Ansible with a nice workflow around it. We have HANA-as-a-service, which gives you HANA machines on bare metal, spun up using Ironic. We have billing-as-a-service. We have our own dashboard, which we call Elektra; it's a complete reimplementation of Horizon, and we're actually using Ruby on Rails, so we're already breaking with the Python dogma. And of course, we need to have Kubernetes-as-a-service.

So how are we doing this? I think we have quite an interesting stack. We're also riding this OpenStack-on-Kubernetes wave, as you saw in Austin from Alex Polvi. We have OpenStack on top of Kubernetes, we're deploying Kubernetes on CoreOS, and we're putting those CoreOS nodes on Cisco UCS blades, right on bare metal. And we have a bit of machinery to build this whole thing up from nothing. Our data centers are completely bare; they're just racked and stacked, and then we put the software on top.

The left part of this picture is the control plane, and the second interesting property of our stack is that we have a separation between control plane and data plane. We see OpenStack as a big remote control, orchestrating the hardware behind it. In our case, we're using Nova to drive VMware, KVM, and also bare-metal nodes, all hosted on Cisco UCS. We're using Neutron, controlling Cisco's ACI, to give us software-defined networking. Cinder and Manila are backed by NetApp, and for load-balancing-as-a-service we have F5. So all our payload runs on hardware, and the control plane, the software side, is OpenStack, running completely separate from it.

What I want to do now is jump into each of these layers and give you more detail on what it looks like. We start with OpenStack. We run it completely containerized, 100%, and it's all Kolla containers. We have our own CI/CD system to build those containers, and a bit of tooling to put our configuration into them more easily.
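SAP's exact build pipeline isn't shown, but the upstream Kolla project ships an image-build tool, and producing a set of service images with it looks roughly like this (the registry URL is hypothetical):

    pip install kolla
    # choose the base distro, build source-based images, and push them to a private registry
    kolla-build --base ubuntu --type source \
      --registry registry.example.com --push \
      keystone nova neutron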
We're not using any of the Ansible stuff; we think that's the job of Kubernetes. The best containers don't help you much if you can't deploy them to your Kubernetes, and until recently there hasn't really been a best practice out of the Kubernetes community on how to do this properly. So naturally, everyone went off and built their own tooling, including us; we actually built it four times and threw it away again. But recently the Kubernetes community has come up with a blessed way of deploying applications, and it's called Helm. In easy terms, it's just a package manager for Kubernetes applications. So we rebased our stuff on Helm. It took us two weeks, and we think it's the way forward. We've been running around talking about Helm this whole conference, trying to make this a reality. It's not perfect yet and it needs a bit of work, but I think this is the right time to jump on the right horse, so that we can take the best of both communities.

The biggest driver for us in choosing this is that it deploys not only OpenStack but also all the other Kubernetes components. We run additional software next to our OpenStacks to keep them operated, like monitoring systems; there are Prometheus instances, exception tracking, this whole suite of applications which we also need to deploy, and we need a pattern for our teams, so they know how to deploy those things. That's what Kubernetes Helm gives us. We previously tried to stick it all into our OpenStack build-up scripts, but that was just a bit too ugly, to be honest. It's open source, and you can find it at the link, which you can't see; it's on GitHub, and the organization is called sapcc. Maybe I can put this up later.

On the next layer, we have Kubernetes, and I'm going to spare you the marketing pitch; there were better presentations about it. The main thing about it, for us, is that it gives us an abstraction on top of the bare-metal blades, an abstraction of our data center. For developing the application, it doesn't matter whether it's a Minikube running on your laptop or an actual data center. As for the setup we're using: we have a VM-like workflow to install it. If you install Kubernetes on GCP or on Amazon, you use the API to spin it up, and we want to do the same with our bare-metal blades, so we're using IPMI and iPXE to achieve that effect. For each of our data centers, we have a declarative definition of what it looks like, with all the IP addresses and the MAC addresses and the whole layout of the machines, and we use that to drive an automation layer that sets up all the infrastructure we need. It's pipeline-driven, using a CI/CD system, and we can install a blade with the click of a button.

The interesting thing is always how you do the networking. Kubernetes has some specific requirements which are, especially on bare metal, not that easy to solve, and various vendors are selling solutions. For our use case, we found all of that way too heavyweight, so we just stick some BGP routes into bird, and we talk directly to ACI to do the networking. And recently (you can't see it on the slide) we've been moving to a more intelligent way of doing this: we have a Kubernetes controller which listens to the API and drives the ACI from there. It's much more knowledgeable about the state of our services, and if something becomes unhealthy, it just pulls the routes out.
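To give a flavor of the "BGP routes into bird" approach: a node might simply announce its pod subnet to the fabric with a minimal bird configuration. This is a loose sketch under assumed bird 1.x syntax, with made-up ASNs and addresses, not SAP's actual config:

    cat > /etc/bird/bird.conf <<'EOF'
    router id;

    protocol static {
      route via "lo";  # this node's pod CIDR (example value)
    }

    protocol bgp fabric {
      local as 64512;
      neighbor as 64513;  # ToR / ACI leaf (example peer)
      export all;                   # announce the static pod route upstream
    }
    EOF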
Going further down to the hardware: we ship pre-manufactured pods, as we call them. I guess our hardware architects also wanted to play with pods. The point is that it's a well-defined bill of materials. We pre-manufacture those racks, we ship them off to our data centers, and then we just have remote hands putting in the cables; then our team comes in and puts the software on top. That's also how we scale this out, not only for the control plane but also for our hypervisors, which run on these pods as well.

I want to come back to the topic of the split between control plane and data plane. The main thing it gives us is independent SLOs between control and data. We don't care that much whether our OpenStack is HA and perfectly available all the time, because when it's down, it's just the API; our customers' payload keeps on running, on dedicated hardware. It also lets us sleep at night, because if something is wrong with our hypervisors, it's another team that's responsible for it. It also allows much easier upgrades of OpenStack, because our SLAs for it are not that tight. And it allows us to keep our setup much simpler, because we don't need all that HA machinery. We're not using any HA for our databases yet, and so far it's okay; we'll see what the next escalation brings.

I actually have a small demo for you now. It's five minutes, and I'm going to show you how this whole thing is built up and what it looks like from the top. If I can switch to the video... if it lets me... ah, here we go. So, we're going to build up the complete data center from nothing. There's a console here, and you probably can't read it, but that's not that important; we're going to see some UIs in a moment. First we're building up the configuration for the tooling. We're using Concourse CI, which comes out of the Cloud Foundry ecosystem; we ported it to Kubernetes. So this is building up the tooling around it. We're using something we call the boot config server, which gives us templated iPXE functionality, and then we just have another pipeline which installs the control plane using IPMI and iPXE.

Let's jump into one of those nodes so you can see what's going on. We're booting CoreOS using iPXE, and then we're using CoreOS to actually install CoreOS on the machine. On first boot, there's a mechanism in CoreOS called Ignition; it runs on the very first boot only, and we use it to put our software on top, which is all the Kubernetes stuff. In the end, it's a few certificates and a few binaries; it's actually not that complicated to install. Then we have a running Kubernetes, and what you see here is the dashboard, the UI of Kubernetes. The next step is how we get our OpenStack on, and for the purposes of this demo we're doing it from the command line. You can see here what our layout of nodes looks like: we have three masters and a few of what we call farm nodes.
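Backing up to the boot step for a moment: a templated iPXE script handed out by a boot config server for a CoreOS node could look roughly like this. The URLs and the per-MAC Ignition lookup are illustrative; coreos.first_boot and ignition.config.url are the stock CoreOS kernel parameters:

    cat > boot.ipxe <<'EOF'
    #!ipxe
    # ${mac} is expanded by iPXE, selecting a per-node Ignition config
    kernel http://boot.example.com/coreos_production_pxe.vmlinuz coreos.first_boot=1 ignition.config.url=http://boot.example.com/ignition/${mac}.json
    initrd http://boot.example.com/coreos_production_pxe_image.cpio.gz
    boot
    EOF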
The farm nodes run the system components, and now we're going to put the OpenStack on top. It's actually just a script; we do this with Helm now, and it drops the specs into the Kubernetes cluster. It takes not much longer than what you see here, like 30 seconds, to dump it all in, and then it thrashes around a bit and eventually comes up. What you see here is our custom Horizon replacement, which we call Elektra. It gives us a convenient onboarding mechanism for our existing customers, we have extended RBAC controls in it, and there are UIs for almost everything you have in Horizon, including our own services. So we're creating a project here; it's all backed by Keystone, of course. And we're spinning up a machine, and then I'm going to show you what this control plane split does for us: we'll do the usual ping demo, then we'll shut down Neutron, and you will see that the VM keeps on running.

An interesting thing about this dashboard is that we have a web CLI built in, which you see here; it's backed by Kubernetes as well. You get your own pod, where you have root privileges, with a pre-authenticated client already installed, so you can remote-control your OpenStack, in the context of that project, right from your browser. So here we are now in our VM, and it's happily pinging. In the middle split you see the Neutron components, and in the lowest screen we're going to shut down Neutron now. We have monitoring around it, backed by Prometheus; we have custom middleware in our OpenStack to give us Prometheus metrics. Here we are now deleting Neutron, and the interesting part is happening at the top: you see that the VM is just happily pinging SAP.com while, in the middle, Neutron goes into error state. You can also see that it's down on our dashboard here. If we wait a little bit, it comes back up again. This is basically the demo of our control plane split: our customer payload just keeps on running; we could delete the whole OpenStack and it would still keep running. And with that, I'm also going to skip the next slide, because we talked about it. I'd like to thank you. We are SAP, deploying OpenStack on Kubernetes.

Host: Thanks. I think we've got a couple of minutes. Yeah, we do have some time for a couple of questions. Any questions?

Audience: Can you go over a little bit more how you're controlling the ACI for the route injection for the bare-metal services?

Michael: I'm not the specialist... test, test... so I'm not the specialist on this, but we're using Neutron, and we have custom drivers we're developing in collaboration with Cisco. The pattern we're using is called hierarchical port binding; it's due to our requirement to have more than 4,000 networks per region. So we just have a custom driver for Neutron, which remote-controls the ACI. I have our expert here, actually, if you want to talk to him; he's sitting right there.

Audience: Okay, you're going to make it difficult. Hello, Michael. You mentioned that you're running CoreOS and booting it from iPXE, and then you use CoreOS to install CoreOS on the hard drive. Why are you not running it straight from iPXE? Why does it need to be installed on the hard drive?
Michael: I guess that's to get rid of the dependency of actually having the iPXE infrastructure available all the time, and it's just, I don't know, maybe the more traditional way of doing things, installing everything. But it's definitely possible, and we thought about just not installing anything on disk. There are a few things which are persistent, though, like the etcd store, so it's not completely ephemeral what's happening; but almost. We could do it with iPXE.

Audience: Okay, I had one more question, which I forgot.

Michael: You can find me later. I'll talk to you later.

Host: Any other... okay, you are really challenging me today. While we're waiting for the mic: how many here are running OpenStack on containers? We're trying to understand how widespread this is. One, two... so a couple of you, besides SAP and Cisco.

Audience: Hi. You spoke quite a bit about the compute layers. I'm curious what kind of analysis you did on the storage layers, and about the thought process and trade-offs that you looked at as you built the architecture.

Michael: I'll take this as a question for me. The storage is also outsourced; we're using NetApp to drive this. Initially we thought about actually running storage inside of the containers as well, but our prototypes didn't go so well, so we decided against it. In terms of storage, it may be interesting as well (something I didn't show) that we're also running Swift, and we're actually provisioning the Swift nodes, with all the hard disks, using Kubernetes as well. We have some custom tooling to prepare the hard disks; we're making OpenStack a bit aware of the Kubernetes infrastructure. There's a DaemonSet which watches for new Swift nodes, and when we spin up a new one, it finds all the hard disks and formats them, using the Kubernetes API more or less.

Steve: As far as Kolla goes, we use Ceph today. NetApp is hard for community developers to work with, and Kolla is a community effort, while what SAP is working on is a product, right? So, like the repo split, there's upstream versus downstream; Cisco has upstream versus downstream too. That's one thing to keep in mind: not everything in Kolla or kolla-kubernetes will be used in Cisco products, or SAP products, or a combination thereof.

Michael: I guess you also have to differentiate between what we're using for persistence in Kubernetes and what we're using for our payload. In Kubernetes, we back it with NFS stores, and we have quite some trouble with that, but we also think there is no really good solution yet for the whole persistence topic on the control plane.

Host: Maybe time for one more. Oh, okay, he remembered his question. You're going to have to meet me halfway this time.

Audience: Yeah, I remembered my question, for the gentleman on the left. I wanted to know: what's the difference between Stackanetes, the CoreOS project, and Kolla? Where are the overlaps, and what are the differences?

Steve: Yeah, that's a complex question, but I'll answer it as simply as I can. In terms of the containers, Stackanetes uses Kolla containers.
That's why we're splitting out the containers. If you look at this summit, I think there have been like 10 or 15 different companies saying, "we're doing OpenStack in containers." We want Kolla containers to be the standard for that, so that it's standardized, and I think that will remain the case, except for folks that need some kind of application that doesn't fit into our repos. But we really will take any container that somebody wants to submit; we're very open about that. As far as the Stackanetes effort itself: in terms of its orchestration engine, it uses something called kolla-mesos, which was a project before kolla-kubernetes, mostly developed by Mirantis. We found that that didn't really work; the Mesos implementation didn't work very well. But they've taken the Mesos part out, put in Kubernetes, and left the rest of the infrastructure that was developed. It was really quite good work. I wish Mirantis hadn't given up on it, because I think we'd be a lot further ahead, and I think product companies would be a lot further ahead as well.

Lou: I think it's kind of interesting that we're looking at Kolla as this foundational place to bring together these different components. Maybe, as we go forward and learn more about it and about what actual deployments need, it gets reconfigured and everything else, so that we can continue to add different options into that environment and all learn as we do this. I don't think anybody is running a pure Kolla implementation today.

Steve: I think you are, actually. And Oracle, I think, is pretty close.

Lou: But I think we're seeing a lot of different variations here, and I consider that still a part of Kolla, because we're trying to make it so that things like the registry and everything else can really be fundamental to this, and especially the containers; that's critical. For that combination, I'd like to see us do the testing of those containers by a large number of people, in different deployments. That's what would really be of high value here.

Host: Okay, I think we're probably getting the boot here; we're getting the hook signal. So thanks again to Lou, to Michael, to Steven. Thank you all for coming, and we hope to see you all again in Boston.