All right, good morning everybody. Can you guys hear me? All right, so luckily not everybody went out drinking last night. You're saving yourselves for the Red Hat party tonight, right? Awesome. So thanks for joining us. My name is James Labocki; I work in the infrastructure group at Red Hat. This is Brent Holden; he's the chief field architect for the East region of North America, at Red Hat as well. Several months ago, when we saw that the OpenStack Summit was in Paris, we tried to figure out how in the world we could get our abstract accepted, so naturally we decided to put Docker, Kubernetes, and Atomic in it, right? No, in all reality, we're going to be talking to you today about our experience trying to run the OpenStack services in Docker, Kubernetes, and Atomic.

Quick agenda, since we've got a short amount of time. First we're going to outline the problem we saw with the OpenStack services today. We're going to talk about some of the current solutions that are out there and available. Then we're going to dive into some of the improvements we think Docker, Kubernetes, and Atomic could provide, and finally we're going to end with a demonstration, a live demo no less.

All right, so really important, I think, is defining the problem. In a lot of the sessions I come to, and a lot of the design summit sessions, it's very easy to gloss over the problem, and if you don't define the problem correctly, you're never going to know what you actually need to solve, right? So it's important to establish what the problem is.

When you look at OpenStack, it's really a thing of beauty, isn't it? We come here, we look at all the architecture, and we say, wow, it's beautiful: there's block storage, I can deploy it, it stands alone, it's completely discrete and atomic, everything works well, and you have your beautiful private cloud. But the reality is actually a little bit different. Here's what you find when you deploy OpenStack. I mean, how many people love deploying OpenStack and managing the services' lifecycle? You love it? All right, we found the consultant in the room. So the problem is that the OpenStack services rely on one another heavily, and the dependencies are complex. Even within a single service there are complex dependencies, and the dependencies between the different services within OpenStack make it even more complex. For example, if I want to update Keystone, all my other services are affected by it, and there's a lot of magic that needs to happen today to make that work. We want to do this much more easily, but how do we do that? And this is not to knock OpenStack: OpenStack is not a beautiful and unique snowflake here. Every infrastructure platform has the same problem.
In fact, at Red Hat we've seen this across a lot of other things as well: our own open-source virtualization platforms, the configuration management systems we use, content delivery, even storage. We see the same problems showing up, which is basically that any infrastructure services you run that have dependencies on other infrastructure services end up becoming a nightmare to manage, to understand, and to update over their lifecycle.

And that's just the operations side; the developer actually has a bunch of problems too. Developers basically want a reproducible environment. They want to know that they can get their environment the way they want it, and that when they hand it to you, it works exactly the same. They want a separation between the operating system and the application: they don't want to have to know that they must run a particular operating system when they deliver their code to you. And the third piece is that they don't want to deliver you a massive manual along with their code for you to be able to get it running. They just want to drop you something you can easily take, deploy, and know it's going to work.

So, real quick, let's look at what the current solutions for lifecycle management are. Again, this is not to knock these, because a lot of them work really well, but the question is: where are they deficient today?
One of the ways you can install OpenStack today is using a build-based system with configuration management. On the left-hand side here, at a very high level, you see a build-based system: some lifecycle management tools, using either a workflow or a state machine with some decision engine, tied together with a repository of content you can pull from to do your builds, and then some configuration management that goes and modifies the system once it's deployed.

There are really three main deficiencies we've seen with this. The first is that it's inefficient. When you deploy with a build-based system, you generally separate your OpenStack services onto different operating systems, and if you're doing that, you're typically doing it on bare metal, because you don't want to deal with virtual machines (or if you do use virtual machines, things get pretty messy). What happens is that OpenStack service A might only be using, say, 30% of its CPU, so it's inefficient to dedicate an entire server to it. What if it's just Keystone? Its CPU load isn't high and it's not using a lot of memory, so you don't want to do that.

The second is that it becomes difficult. If you do want to stack two OpenStack services on the same operating system, you have to know the dependencies between them. For example, if I put Keystone on a server and then put the Glance API there too, maybe they're going to use the same port. Who knows? Both of these projects are moving at speed, people are making changes, and there's a chance for a conflict. You also don't know what quality of service you're going to get, so one might impact the other.

And finally, the third is that it's just a slow deployment.
You're pulling packages across the network, and at scale you don't want to be pulling each package one by one. So it's a good solution, but those are the three deficiencies of the build-based-with-configuration-management approach.

The second approach is image-based. You see these image-based deployment solutions; again, a good solution, but there are two areas where they struggle, even if you're using image-based deployment with a declarative model. The first is that it's still difficult, because at the end of the day you're going to have some image builder assembling the image, and you have to figure out how the OpenStack services get onto that image. You basically face the same problem, just earlier in your lifecycle, at image-build time: can I layer Keystone onto here? Can I put this API in here? How do these two services relate, and how are they configured on that image? It becomes the same problem moved into an earlier stage. The second problem is that the updates are expensive. If I have to push an entire image down to a system every time I want to update, that's an expensive process, and I don't want to do that each time.

All right, so with that, let's talk a little bit about tomorrow's improvements. What if we actually had a solution that was isolated, lightweight, portable, and pre-integrated, so that the developer could do all the work and then deliver it to the operator through some means, and the operator could deploy it? What if, instead of me giving you thousands of lines of your favorite configuration management solution, we could easily describe the runtime relationships in a declarative way, with scheduling included and all that? And third, what if it could run on something thin and easy to update, so I didn't have a giant operating system underneath that took up a lot of space and had its own challenges to manage?

For the first part, isolation, lightweight, portability, and pre-integration, that's what Docker is meant to solve. Quick show of hands: how many of you have not heard of Docker? Okay, even you. So one person likes installing OpenStack and managing the lifecycle, and nobody hasn't heard of Docker. For easily describing runtime relationships, Kubernetes allows you to describe runtime relationships and also to schedule containers across different systems. How many of you have not heard of Kubernetes? Okay, a few. And then Atomic is a very thin operating system optimized for containers, with just what you need to run containers, and it's easy to update as well. How many of you have not heard of Atomic? All right, this is like a pyramid, right? So the goal is really to develop locally and run in production with less friction: the developer develops, pushes it over, and you can run it in production with a lot less friction. So, real quick on Docker.
Since basically nobody raised a hand, I hope I don't have to go into a lot of detail introducing Docker. It's Linux container technology: it provides an API on top of Linux containers, allows for relationships between parent and child images, and things like that. The key is that developers don't want to ship virtual machines to ops, because they're heavyweight. You don't want to ship a whole VM, and getting metadata out of VMs is expensive: you need to either keep some sort of agent on the VM or do introspection, which can get very difficult. They also want to own the integration: they want to make sure the image they give you is ready to go, so you can take it, run it, and it's pre-integrated, and you don't have to worry about runtime configuration as much. And third, they don't want to learn your packaging format. They don't want to learn RPM, they don't want to learn deb, they don't want to learn MSI; they just want to ship you their image. That's the benefit Docker brings.

On to Kubernetes and container scheduling. This is a quick overview of the Kubernetes architecture. There's a master/minion concept: the minions are what actually pull down Docker images and run your containers, and the master is what does all the scheduling. There are a couple of services in here that are important.
You've got the proxy service, which essentially allows people to reach your services externally, and that plumbs down to the pods. A pod is a collection of containers running on a minion that all share a single network namespace. There are a bunch of other services we won't get into here, but you control it all by sending kubectl commands down to the master, which then orchestrates the deployment of the pods and services. Brent will get into this a little more in the demonstration, but services are what allow you to publish your containers to the outside world through a proxy, and pods are what allow you to define the deployment of the containers.

The other thing Kubernetes provides is a declarative syntax. Instead of running `service mongod start` and going through all your steps imperatively, the declarative approach lets you say, "This is what I want the system to look like when I deploy it," and Kubernetes figures out how to deploy that. The saying I've heard from the Google side, where they launch millions of containers, is that declarative always trumps imperative, so we want to stick with a declarative syntax.

And finally we have Atomic. Atomic is a thin operating system optimized for containers. It includes Docker, Kubernetes, and etcd to let you run all this, and it doesn't include a lot else; it's very thin and light. It also lets you update using new mechanisms like OSTree, so you can rebase the OS very quickly and upgrades happen very fast. So that's an overview of the three technologies.

All right, so we want to go through how this changes your life. I'm going to dive through here.
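To make the "declarative trumps imperative" point concrete, here is a toy sketch (invented names, not the Kubernetes implementation): a declarative system takes a description of desired state and computes the actions needed to converge, instead of you scripting each step.

```python
# Toy reconciliation loop (conceptual only, not Kubernetes code):
# given desired and actual state, compute the actions that converge
# the system, rather than scripting imperative steps by hand.

def reconcile(desired, actual):
    """Return start/stop actions that converge `actual` onto `desired`.

    Both arguments map pod name -> replica count.
    """
    actions = []
    for pod, want in desired.items():
        have = actual.get(pod, 0)
        if want > have:
            actions.append(("start", pod, want - have))
        elif want < have:
            actions.append(("stop", pod, have - want))
    for pod, have in actual.items():
        if pod not in desired:          # running but no longer wanted
            actions.append(("stop", pod, have))
    return actions

desired = {"keystone": 1, "glance-api": 2}
actual = {"glance-api": 1, "old-service": 1}
print(reconcile(desired, actual))
```

The operator only ever edits `desired`; the system keeps re-running the loop until `reconcile` returns no actions.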
From a developer's standpoint, the goal is that the developer, whether they're running Linux or another operating system as their base system, can easily develop using their choice of tools. They can use their choice of hardware to develop; maybe they're spinning up a Linux box with Vagrant, as in this example, though it doesn't have to be Vagrant, they could be using bare metal. They're just using source control, and they can use Puppet or Chef, or whatever configuration management language they want, to build their images. Of course, between them and git there could be any number of continuous integration tools, whether that's the native tooling OpenStack provides or their own CI/CD tooling from another platform, such as a platform-as-a-service vendor like OpenShift. The great thing here is that the developer gets easy access to their environment: they're able to deploy it very quickly, and they're able to develop easily. Then, when they're done developing on their choice of operating system and hardware, they can publish.

So really the line of demarcation between developer and operations is the Docker registry. The developer finishes their changes and pushes them up to the registry, and on the operations side, the right-hand side, whether that's test, UAT, or production, things can then be deployed via Kubernetes. As we said before, for those lifecycle management solutions, the build-based with configuration management and the image-based with a declarative model, we haven't really figured this part out. I probably should have said this in the beginning: this is all completely experimental, right?
But on the right-hand side, there's got to be some relationship between Kubernetes and the deployment tools that exist today. Whether you're using Foreman, or Fuel, or TripleO, or StackIQ, or whatever it happens to be, there should be some relationship between Kubernetes and the lifecycle management tools that exist today; that's what we're trying to get at. So once the developer launches it, Kubernetes takes care of deploying the OpenStack services in the Docker containers, and then you have your working OpenStack environment.

All right, demo time. You guys excited to see the demo? Awesome. So we're going to make the mistake of doing a live demo today. Let me get out of your way here. All right, so I'm already on a corporate VPN, so this may or may not work. Can you guys see that in the back? Okay, everyone can see it. It'd be a shame if I did all this work and you couldn't see it.

So, first things first: we want to show off the environment we built this in. I happen to run an OpenStack environment internally. One thing we didn't really talk about so far is where the demo comes from and where we get all these bits. The demo itself comes from an upstream project called Project Kolla, which I'm sure some of you have heard of this week. Project Kolla is basically a sub-team the TripleO project created to investigate containers as a way to provision OpenStack services. A lot of what I'm doing in the demo today, in fact all of it, is easily replicated; all the documentation is upstream, and we can shoot you the URLs at the end of the presentation.

Even though I'm demoing on OpenStack today, that's just an easy way for me to get a VM so I can show you this. It's not the ideal use case: most people do not want to provision OpenStack within an OpenStack tenant environment. That's not a typical use case. So I'm going to log into my OpenStack environment just to get access to my VMs, and I'll show you where those VMs are coming from and some configuration parameters.

Okay, so I have a tenant I set up for Kolla; we're going to move over to that. It's a little slow. All right, here we go. We run an environment very similar to TryStack, for the folks in the room who are familiar with that; it requires a tenant router with associated tenant networking set up. I have a very simple master-and-minion setup, since it's meant to be a demo. Ideally with Kubernetes you'd have multiple minions to deploy to. The Kubernetes scheduler is not particularly intelligent as of right now; it's just designed to do a round-robin between your minions when it's scheduling pods. I'll go into what pods are and what they do in just a minute, but for now just know that I have a master and a minion, and the minion is where all my Docker services are going to land.

Okay, let's go over to my instances. I've got some IPs here that I'll share.
So my master is on 1195.6, and I've got 95.7, which is my minion. And just while he's staging in there, two things: we're doing this on top of OpenStack, but you could certainly run this on a standalone workstation too if you wanted, and get all the OpenStack services on, say, a Fedora box or otherwise.

Okay, so I'm going to log into my master and minion just to demonstrate what's going on. On my master, what I'm going to do is... I have caps lock on; this command will not work. Okay. So I'm not running anything right now, and that's kind of the point of Atomic: it's a platform for you to land containers on. The idea is that right now this machine is a complete blank slate. It's running a slightly modified image of Fedora, but this also works on Atomic, either Fedora Atomic or CentOS Atomic, for those of you who want to replicate it.

On my master I have Kubernetes installed, so I'm going to do a kubectl list of minions, and you can see I only have one minion registered. What I'm going to do now is kick off the process to provision things. Like I mentioned, everything we're doing is based on Project Kolla, which is upstream: it's a git repo on Stackforge, the kolla repo. You can git clone the repo, and it actually has a really excellent reference architecture for Kubernetes. For those of you who want to dive in and learn more about how services and pods are constructed, it's a great way to learn and to look at how someone else has done it; at least it was for me. So what I'm going to do here is find the tools.
I'm going to kick off the process, which isn't going to make a lot of sense right now. You can see it's executed a bunch of commands on the master, and then I'm going to go back to my minion, and you can see it's now starting to do things: Kubernetes is setting up these different actions. So now I'm going to flip back to the presentation and describe what those actions are, and after ideally three minutes this thing should be ready to go and we can show that OpenStack is functional.

And while he's switching back and forth, I just want to point out a couple of people. If you've got questions about TripleO and that side of Red Hat, Keith Basil is in the front; Keith, turn around and wave. If you've got questions about Foreman, Arthur Berezin is over there; he's from Red Hat as well. Hugh Brock is in the building, there's a hand over there; he didn't get in the door, so we found out somebody did go out drinking last night. And I'm trying to think who else is in the room, but yeah, definitely reach out to those folks.

Absolutely. Okay, so what we did here was execute the start tool. Kubernetes is a rapidly changing project: older versions used kubecfg, but kubectl is the new command. That start command executed kubectl, which communicates with the master and tells my API server, "Hey, I want to execute these actions; these are the different pods and services I have defined." The start command I executed actually executes three different commands; one is to create the pods. Here, let me flip back to my master and we'll talk about that. I mentioned this is a great way to learn Kubernetes, so what else is in here? I've got pods, replication controllers, and services; I'll go into the tools and show you. It starts all the services, then the replication controllers, then all the pods.

If you look at what start-all-pods is doing, right now it's pretty simple: the start command wraps those three (services, replication controllers, pods), and those call down to Kubernetes. As Project Kolla stands today, it's not a complete implementation of the OpenStack services. We deploy RabbitMQ and MariaDB as the foundational services that are absolutely required for OpenStack, but beyond that we're only really deploying Keystone, Glance, a couple of Nova pieces (the Nova controller), the Neutron controller, and Heat. The Nova controller is interesting because Kubernetes gives you an easy way to break these services down. The Nova controller isn't necessarily one container running all the Nova processes; it's actually multiple containers that get spun up, each exposing its own binaries and running its own services: nova-api, nova-conductor, and nova-scheduler are good examples of that. Same thing with Glance, which can be deconstructed into glance-api and glance-registry. Each one of those is a discrete Docker image, and the Kubernetes pod construct plumbs them together through localhost on the same host. You'll see there are a bunch of "pause" containers on there, and those get plumbed together through that mechanism.

Okay, so now that my master has been told to execute things, it's going to look through its list of minions. The scheduler, like I said, is relatively dumb right now; it just does a round-robin. But the idea is that eventually, if you have multiple minions, say at the level of thousands or hundreds of thousands...
I don't know what that scale limit is, but you should ideally plug it into another type of infrastructure that knows the topology, and that's where the Mesos and Kubernetes integration really starts and ends. For now, we just have that single minion. So the master tells that single minion, communicating through the kube-proxy: "I want you to start up these different containers, and I want you to start up cAdvisor to do the monitoring," and we'll see how it goes. The Docker images it's going to pull are all located on the upstream Docker Hub repository; for today's demo purposes I've pre-cached them using a simple pre-population script.

So let's go back and see how our containers are doing. I think at this point it should be done; it came up about a minute ago. So in the time we've been talking, OpenStack has been deployed. It doesn't look obvious, but it is running. You can see all of my containers here: nova-scheduler running, heat-api, glance-api, nova-api, and so on. You might also see some weird containers in the list, I'm not sure if you can see them here: these "pause" containers that Kubernetes spins up. I'll go into why that is in just a minute, but the idea is that Kubernetes sets up those shared network namespaces, like James mentioned, that are shared within a pod, and that's important to solving some of our problems in the future.

Okay, so for right now, let's find Keystone. I'm going to do a docker ps and grep for keystone. Okay, here it is; I'm going to enter the container. Brent actually has his monitor set up like this at home, so he works like this all day. It's very convenient, with practice. Okay, so by default we set up some default admin credentials: the admin user with a default password of kolla, in the admin tenant.
So that's what I'm going to use. I'm going to source this... that's not going to work... okay. So now I'm going to perform a keystone user-list. As you can see... yeah, it's pretty cool, right? Okay, so Keystone works. That's great, but Keystone by itself is a pretty boring setup. Let's see how Glance works. I'm going to break out of the Keystone container and do the same thing for glance-api; I'm going to enter that container. Okay, so I'm in the Glance container, I'm sure you guys can see that, and I have the same kind of credentials to source; actually in this case it's the glance user and password, for the admin tenant. So I'm going to source that and clear the screen; I think there might be some text down at the bottom.

Okay, so I can see there are no images in Glance right now, because it's just spun up. I'm going to cheat for just a second and copy in a command, so this goes a little smoother. I'm basically going to retrieve my image: I'm grabbing the CirrOS image, and I just pump that into Glance. Okay, so now I have a CirrOS image uploaded, and if I do a glance image-list, I can see that 0.3.3 is there. Because I'm in the Glance container, if I go into /var/lib/glance/images, where you would expect images on a local filesystem, I can see that the same UUID is present in both places. That's where my image landed.
So it did communicate to the container. Even though it's effectively going to itself, because I'm uploading from the Glance container, that's how it would work from any other machine. And that's really it as far as OpenStack goes. My Glance container is able to communicate to Keystone, Keystone is running, and everything is able to communicate to Rabbit, because the Glance API communicates to the Glance registry via the AMQP message bus. So that's how it all works. Let's see, what else do we want to talk about? Yeah, towards the end we've got some of the challenges and the road forward as well.

Okay, so first: we have a working OpenStack installation, and that's great. I love a working OpenStack. But at the end of the day, maybe I want to take this machine and rebase it to the blank slate it started as. Doing that type of action with Puppet is very painful; as many of you are aware, you might as well nuke the machine from orbit and re-kick it fresh. But in this case, what I can do is run the stop tool. What this is doing is telling the master: I want you to stop everything you're doing, all the services, all the different actions you're performing. And on this guy, my minion, I can watch: you can see those containers starting to spin down, so they won't exist in a few seconds. This machine is going to be in exactly the same state it started in at the very beginning of the presentation. That's pretty cool, because if I did want to do something else, say deprovision a certain pod of services and reprovision it onto another host, they get deprovisioned here and reprovisioned somewhere else. That's a very powerful message of containers, where doing that in the Puppet-based world is a very complicated and difficult action, and if it's image-based, it's very expensive to push a new image down. Absolutely.

Okay, so let's go back to the presentation. We're very brave for using Google here, straight to Google; we trust the internet.

So, some of the challenges we encountered while creating this project. Keep in mind this project has only been around for roughly a month, less than a month, and that's all the work that's been accomplished. In fact, if you've contributed to Kolla, I don't know if any of the guys are in here today, can you raise your hand? They might be in the hall. Oh, there's Lars there. So yeah, that's a couple of guys and one two-week sprint. This is like milestone one, just to get things running. It's amazing how far this project has come in a very short period of time: not only getting the stuff Dockerized, which I think is probably one of the easier parts, but also getting all the Kubernetes services described and tested. Kubernetes is evolving very rapidly; even between 0.3 and 0.4 you'll notice differences. It was kubecfg in our diagrams, and it has changed to kubectl since.

Some of the major challenges we ran into were around external connectivity. All the services we provisioned in the demo are more or less outside of the data path, maybe with the exception of Glance. Keystone, the Nova scheduler, the Nova API: those aren't necessarily within the data path. The things that are within the data path are very complicated: the L2 agent, the L3 agent, nova-compute. You can provision those types of services via containers, although it has its own set of complications, and during the design summit yesterday afternoon there was a real debate about whether it's even a problem worth tackling.
So we think it is. I think it's a great idea, but it has its own rat's nest of issues. Multi-host networking has the same types of problems: even within Kubernetes, when you start spinning up multiple minions, they need a way for the Atomic hosts to communicate with each other. In this demo you couldn't see it, because we're not using it, but Lars created this tool called link manager, and there are other tools out there, like Weave for example; people are trying to solve the problem of how to build overlay networks between the hosts. Then there's the privileged container model: Kubernetes does have an understanding of privileged containers, although it's got some limitations right now. I'll keep it simple.

And then runtime configuration. A comment I hear all the time from my customers is that they want to hook those OpenStack services that sit outside the data path into other things: maybe it's an SDN, maybe they want to hook Glance into S3, problems like that. That type of runtime configuration doesn't exist yet in Kolla today. And just to quickly expand on that: basically what was happening is that when those Docker images were starting, they were pulling environment variables that Kubernetes was passing into them via etcd. That entire area needs to be more deeply investigated to find the best way to do it, because we want to work within the OpenStack community's constructs, not impose anything on it.
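The pattern just described, a container entrypoint assembling its service configuration from environment variables injected at start time, might look roughly like this sketch (the variable names are invented for illustration, not Kolla's actual settings):

```python
import os

# Hypothetical sketch of a container entrypoint building its config
# from environment variables injected by the orchestrator, with
# fallback defaults when a variable was not passed in.
def build_config(env):
    return {
        "keystone_host": env.get("KEYSTONE_HOST", "127.0.0.1"),
        "rabbit_host": env.get("RABBIT_HOST", "127.0.0.1"),
        "debug": env.get("DEBUG", "false").lower() == "true",
    }

# In a real entrypoint you'd read the process environment:
config = build_config(os.environ)
```

The limitation Brent is pointing at is that this only covers start-time configuration; changing anything afterward means restarting the container with new variables.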
Yeah, definitely. Certainly, persistent storage is a big issue. Just using container volumes does not solve the problem of persistent storage. A volume is just meant to separate the running container from that particular set of data. The idea with volumes is that you have multiple containers that are all looking at that same piece of data, and the volume goes away when the last container goes away. So that doesn't necessarily solve the persistent storage problem. Chances are, if you're deprovisioning, let's say, MySQL while it's under load, you want to provision it somewhere else, and chances are you don't want all your data to go away as well. That would be a major problem.

The monitoring and logging piece is also something that I think is, in general, a major operator problem that most OpenStack operators face. We use cAdvisor for the basic container functions, and that's built into Kubernetes. But as far as using containers with things like supervisord goes: supervisord is great, you can start multiple binaries within a container, but as far as monitoring goes, you only know if supervisord is running; you don't actually know if your binary is running. So even at the container level there are monitoring issues, and there are also the typical OpenStack service-availability monitoring issues as well.

It's also early days for Kubernetes. I can't really stress that enough. Kubernetes is changing very rapidly; there's a lot of new code being dropped. I mean, we'll be in IRC and someone will say, hey, I committed this Git commit today that fixes that particular bug. So that's kind of where we're at right now. So I think Kubernetes has a very bright future, although it's still very early days, and I think that's very representative of how far it's come. It was only released, was it this summer? Yeah, earlier this summer.
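The supervisord caveat above can be sketched in a few lines: asking whether the supervisor is up tells you nothing about the service binary it launched, so a health check has to probe the actual process. This is a minimal illustration, not a real monitoring setup; the service names are examples only.

```shell
#!/bin/sh
# Sketch of the monitoring gap described above: probe the named
# process directly instead of trusting the supervisor's liveness.

check_service() {
    name="$1"
    # pgrep -x matches the exact process name.
    if pgrep -x "$name" >/dev/null 2>&1; then
        echo "$name: running"
    else
        echo "$name: NOT running"
        return 1
    fi
}

# supervisord may be alive while the binary it started has died,
# so a useful check asks about both.
check_service "supervisord" || true
check_service "nova-api" || true
```

Even this only covers process liveness; the service-level availability checks an OpenStack operator needs (is the API actually answering?) are a separate layer again.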
So, okay, I mentioned that all the stuff we're doing is all upstream. It's important for Red Hat to do things in an upstream, transparent way as much as we can. I mentioned earlier that it's Project Kolla; here are the blueprints for Project Kolla. You'll see all the different things that we're working on: Project Atomic, the stuff that Lars has written for heat-kubernetes that makes it easy to provision those types of images like I did in the demo today. The Kubernetes stuff upstream is hosted on Google Cloud Platform, and Docker has its own at docker.io. Yeah, so, the design summit session, we were going to invite you to it until we realized it was before this presentation, so just go read the notes instead. And definitely jump on IRC and the mailing list.

Okay, so I just want to very briefly cover what I talked about earlier, those challenges around doing networking within containers. More or less, the challenges that we ran into were that the L3 and L2 agent interactions are very complicated. They really do need to be singing from the same sheet of music as far as the host networking goes. In a default Docker configuration, which is where we first started investigating this, you're going to get your own process namespace as well as your own network namespace for every single container you create. And that creates a lot of complications when you try to take the networking model and put it into containers, because what you end up with is multiple models running in separate containers. They're not talking the same language to each other, so they're not going to interoperate; nothing works. So you would not do this.

Okay, so Kubernetes has a really unique property. I mentioned earlier that it has the ability to deconstruct services really easily, and the reason it can do that is because it creates a shared network namespace.
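To make the shared-namespace idea concrete, here is a minimal sketch of what a pod grouping the network-node agents could look like. It is written against the modern Kubernetes v1 API rather than the v0.x schema in use at the time of the talk, and the image names are hypothetical; the point is simply that containers in one pod share a single network namespace and see the same interfaces.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: neutron-network-node
spec:
  hostNetwork: true            # the agents manage the host's own networking
  containers:
    - name: neutron-l2-agent
      image: example/neutron-l2-agent    # hypothetical image
      securityContext:
        privileged: true       # needed to manipulate bridges and namespaces
    - name: neutron-l3-agent
      image: example/neutron-l3-agent    # hypothetical image
      securityContext:
        privileged: true
```

Because both containers land in the same network namespace, the agents observe the same networking state while still being packaged and restarted independently, which is exactly the isolation-plus-cooperation property described here.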
It's a really cool feature of Kubernetes, where you can create these different containers that are able to look at the same networking properties. So you're able to get isolation of the individual OpenStack components within the service, but you can still get them to talk to each other on a single host very easily. Exactly. So for things like the network node, which people are familiar with, where you're running the L2 agent, the L3 agent, ovsdb-server, ovs-vswitchd, all those things really need to be deployed as a pod. They need to share the same namespace. The idea is that it's possible to do this. This slide is maybe a little wishful thinking right now; Kubernetes does not have the ability to provision multiple interfaces. But the idea is that you could take multiple containers and have them look at the same network properties.

Yeah, so just real quick, the slides are available there, so if you want to pull them down, you can do that. And I think we're going to take some questions now. Yeah, absolutely. We've got three minutes left, so if you have any questions, please go ahead.

Fleet? Yeah, Fleet. So, the stuff we're using here, I think James mentioned very briefly that Atomic includes etcd. So we are using etcd under the covers here as well, right? So some of those are common components; Fleet is just another scheduler. Red Hat partnered with Google to release Kubernetes and solve the problems that Google is seeing based on the scale at which they're deploying. It's basically two ways of solving the same problem, and we think that Kubernetes has a brighter future. In the front, please.
Yes, I did go to that talk. Right, so the IBM talk, I went to it as well, so I'll speak to it. The IBM talk was very focused on Dockerizing services and then creating a lot of custom hooks and scripts that perform a lot of the actions, and their front end was based on Shipyard. It was a cool solution, I can admit; theirs is very UI-based, while Kubernetes is API-based. So we think that Kubernetes has that advantage, and we also think that Kubernetes has a lot more robust features. When I talk about how it does that shared-network-namespace stuff and how it sets all that up for you, it has the concept of pods and services, and that's stuff you're not going to get from the type of web interface that Shipyard is providing. Also, to jump on that, I think there are going to be multiple ways that people are going to leverage containers across OpenStack, so getting a common understanding of how everyone can build them the same way is a good idea. Right, and then anyone can plug in whatever deployment mechanism they want as well. So while the Kolla project is providing Kubernetes templates, certainly if somebody wanted to come along and provide a means by which they could deploy in other fashions, I'm sure that's no problem.

Are there any questions? I think we have time for one more.

ETA on Atomic? I believe it will be next year. Next year; can't get more specific than that.

Okay, and that's all the time we have, folks. If you have any more questions, please come and see us.