Hi folks, I'll go ahead and get started. My name is Steven Dake, from Cisco Systems, and I'm here to present about Kolla. And this is Sam Yaple. Sam, would you introduce yourself, please? I'm Sam Yaple. And you work where? I work at a company called Cervocity; it's a backup company. And your interests are what? Well, I like storage, obviously, because I work at a backup company, but here with Kolla I highly enjoy automating deployment, and OpenStack is kind of a complicated thing to deploy. Cool. So Sam's a core reviewer on Kolla. We have nine core reviewers, and Sam is one of them. He's been pretty instrumental in helping with the Ansible aspect of things, so I thought it'd be good for us to both present Kolla with our new Ansible deployment system. So first of all, we'll go ahead and get started. Sam, why don't you go kick off the demo?

Okay. The first thing we're going to do here is... I'm just going to dive right in and show you how we configure and deploy Kolla, and while that's going we can talk a little more about the architecture and other things about Kolla. I don't know how to switch out to the SSH window. It's a live demo, by the way, so that'll be fun. There we go. Escape. Okay, I don't know how to use a Mac. It's, like, so easy. Kind of. I did it; it's cool, not a big deal. I think we'll get it. There you go. All right.

So I just want to show right now what we're going to deploy. We're going to be deploying on three nodes, and these are the three nodes. I just want to show you that there are no containers running on them, so we'll be doing all this fresh. 
I believe our wireless may have dropped out. There we go. All right, so I have Kolla already cloned from the Git repo. The first step is to copy what we have in our etc directory here into the /etc directory on the deploy host, so I'm just going to copy that in now. It copies in quite a few files, and we'll talk about those, but the main file where you'd do most of your configuration is globals.yml, and that's where you have most all the options you need. Out of the box, Kolla needs very few options; I believe in this case only five to deploy working OpenStack across the board. I'll go through what these options are.

The first one it needs is a base distro type. We build against CentOS and Ubuntu, and we build two different package types, binary and source, in our gate, so those are the ones that are actively tested and gated against. Here we're going to be deploying CentOS, and we're going to be pulling OpenStack into those containers from binary packages. We could also build from source if we chose; that way you could have your own fork of Nova or Neutron or whatever, point it at that Git repository, and it'll build the container based on that.

This option here, kolla_internal_address, is a VIP that's used by keepalived in our case, just a standard virtual IP address. This is what we use to talk to all of our API services, and the VIP we're going to put on there is .148. It just needs to be an unused IP on your network, because we'll set up keepalived and HAProxy by default. Should you want to, you could turn those services off and use your own load balancer, and you'd handle all that yourself; you'd still put in the VIP that you plan on using there.

For the external address, we're going to put in the external address this box can be reached at. It's called broke.selfip.net. Hoping that's not accurate. I didn't name it. Probably accurate. 
No, no. So we're going to put in the external address there, and right now that's really only going to be used in Keystone; it'll show up as your external endpoint so you can connect to these services over the internet, and that's really the primary place it's used.

This docker_registry option: you can push images to the Docker Hub, which, for those of you familiar with Docker, is just a central repository of images that can be reached and pulled from just about anywhere. In our case we have a local registry, and I'm just going to put in the information to connect to it, which is at this address. What's that? That would have worked. It wouldn't have; don't do that. So this is the address on our internal network where a Docker registry is running. In our case we only need to set that, but if you had authentication or anything on your registry, you can optionally set these other parameters; for us, only the registry address is needed.

There are other options in here I'm not going to talk about. They're pretty well documented, and there are more advanced options throughout our documentation, so I'm only going to discuss the main ones of interest, the ones we're using in this deployment. Well, since it doesn't want to move... okay, there we go. Just be patient. You guys see the lights up here? No? It's a little nerve-racking.

All right, so the network interface here: this is going to be the network interface, and it should have an IP address on it. 
This is what all your services will bind to by default: all your API services, storage network traffic, tenant network VXLAN traffic, all of that will bind to this network by default. So you want to put an interface in there that has an IP address and can be contacted; in our case we have IP addresses on that interface. You can be more specific and set an API interface, or storage and tunnel interfaces, to make sure those particular kinds of traffic go over the interfaces, and therefore the networks, you want them to.

One of the final things we're going to set here is the external interface. This is the one you would typically give to Neutron for Open vSwitch or Linux bridge. It shouldn't have an IP address, because in most configurations that IP address is no longer usable; that's why it's recommended not to have an IP on this interface. It's used mainly for L2 traffic. Switching between Open vSwitch and Linux bridge, which can be set up in different ways, is real simple here: you just change an option. That's how most of the OpenStack configuration is done throughout: you change a small option in this globals.yml and you change what you deploy.

The tag that we're going to be using is... are we using the latest tag? Yes? Okay, so we've tagged our images and built them ahead of time, and we used the latest tag. I think we actually have a different tag, but it'll break pretty quickly if we do, since the first thing the deploy does is try to pull an image, so we'll change it if that's the case. We're going to enable two services. 
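For readers following along, the globals.yml options walked through so far, plus the two service enables discussed next, might look roughly like this. All values are illustrative, and the option names follow the Kolla documentation of this era, so they may differ in later releases:

```yaml
# /etc/kolla/globals.yml -- illustrative sketch, not the exact file from the demo
kolla_base_distro: "centos"                # base image distro: centos or ubuntu
kolla_install_type: "binary"               # binary packages, or "source" to build from a git fork
kolla_internal_address: "192.168.1.148"    # unused VIP; keepalived/HAProxy put it up
kolla_external_address: "broke.selfip.net" # becomes the external Keystone endpoint
docker_registry: "192.168.1.100:5000"      # hypothetical local registry address
network_interface: "eth0"                  # has an IP; APIs, storage, VXLAN bind here by default
neutron_external_interface: "eth1"         # no IP; L2 only, handed to Open vSwitch/Linux bridge
openstack_release: "latest"                # image tag the deploy will pull
enable_cinder: "yes"                       # the two extra services enabled in the demo
enable_ceph: "yes"
```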
We support deploying Ceph automatically with these playbooks, so we're going to enable Ceph, and this will configure OpenStack to use Ceph as well. This is great for testing, because Ceph can be a pain to set up sometimes, so setting it up automatically is a pretty nice way for us to exercise all the Ceph code we have. With those enabled, I'm going to close out the file. That was it; in total that was, what, one, two, three, four, five... eight options in total, I think, with Cinder and Ceph enabled. So it's eight options, and that's enough to get you rolling with OpenStack and Kolla.

Because we are using Ceph, and I did mention we can deploy it automatically, one thing we do have to do is set a special flag on the disk itself. I'll show you that it doesn't have a flag right now; there are no partitions on this disk. What I'm going to do is put a special flag on there; I may even have it in my history. Yeah, so I'm going to set this KOLLA_CEPH_OSD_BOOTSTRAP flag. This is just a way for Ansible and the Ansible scripts to know that it's okay to use this disk, so it's non-destructive otherwise; you need an explicit action on your part before it will try to use that disk. And you see it's set.

After that, I'm going to go ahead and do the deploy. We're going to run time on it as well, so you'll see how long it actually takes. Oh, the inventory file; let me explain that real quick. The inventory, as I've already said, is pretty simple. 
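A rough sketch of such a multinode inventory (host and group names are illustrative; the real file carries many more service groups that let you scale out individual services):

```ini
# Illustrative Ansible inventory: three hosts carrying every role for HA
[control]
minime01
minime02
minime03

[network]
minime01
minime02
minime03

[compute]
minime01
minime02
minime03

[storage]
minime01
minime02
minime03
```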
This is what controls which services run on which hosts. It's a standard Ansible inventory, so there's lots of documentation from Ansible on this. In this case we have three servers, called minime01, 02, and 03, and I've set them here so that it knows to deploy services on those hosts. The file can be as explicit as you want, so you can scale out just nova-api, or just RabbitMQ, and how to do that is documented in the file. Here we're just using the basic setup: since we only have three nodes, we're deploying all the services on all three nodes for high availability. But that's the structure of the file.

I'm going to go ahead and run the deploy now, and like I said, I think we may have named the tag something different, so we'll see. It's going to start running; you see the Ansible command up there. It's going to go through, copying over configuration files and finally starting containers for each individual service. The first action that starts a container should come up pretty quickly, and if that times out, I'll know I have the wrong tag. Now look at that: it started the container. So, okay, my turn.

So now the hard part. I'll go back into our slides, and what we're going to do is let that run. It takes about 18 minutes to deploy, and I'm not going to stand here and watch Ansible scroll for 18 minutes; that's kind of a big waste of time. Instead I'd like to explain the environment a little and talk about what we've got and how the system operates, as soon as I can figure out how to operate PowerPoint.

Okay, there we go. So this is my home lab configuration. There are two networks. One is a management network; you see that's the 192.168.111.0 network. The other network is the 10.0.2.0 network. 
That's the Neutron public network, which goes to a public router, which goes out over my cable modem, in the upper left-hand corner. That's broke.selfip.net. The cable modem uses DynDNS, so it registers its IP address with the global DNS system, and I can connect to broke.selfip.net and access my machine, my OpenStack cluster, from remote. Now, if you were running a real operation, you probably wouldn't have this kind of setup; this is how I set up my lab, and you might have a different kind of configuration.

The other key point here is we have a virtual IP address, that .148, which is just an IP that is unassigned. We run HAProxy on that address to load-balance to all of the N-way active HA services. We run all of OpenStack N-way active HA, and by doing that we get really good high availability. The only thing we don't run N-way active is the database, because the database locks up; I'm sure many people are aware of that. So that's my home lab configuration.

Here's the NAT setup I've got in my environment. Basically, we're going to access my cloud at home from here in Japan, over the wonderful wireless at this conference, and the way for that to happen is through NAT, so on my wireless router I've got NAT set up. The key thing to point out here, let me step off the stage... the key thing to point out is that this device IP is 192.168.1.148, and these are all of the different services: we've got Horizon, Keystone, Glance, Nova, Neutron, Cinder, Heat, and Keystone admin. You can set this up any way you like. I would expect that a proper deployment would probably use NAT as well, maybe with real gear, not a WRT1900, so this is kind of a guess. 
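The forwarding table just described might be expressed like this on a generic Linux router (shown as a dry run via echo; the ports are the standard OpenStack API defaults, the VIP is the .148 address from the talk, and his WRT1900's web UI would express the same rules differently):

```shell
# Illustrative DNAT rules, printed rather than applied.
# Ports are standard OpenStack defaults: 80 Horizon, 5000 Keystone, 9292 Glance,
# 8774 Nova, 9696 Neutron, 8776 Cinder, 8004 Heat, 35357 Keystone admin.
for p in 80 5000 9292 8774 9696 8776 8004 35357; do
  echo iptables -t nat -A PREROUTING -p tcp --dport "$p" \
    -j DNAT --to-destination 192.168.1.148:"$p"
done
```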
I don't know specifically, but that would be my expectation. So that's how the demo is set up; very straightforward. The demo takes, again, about 18 minutes, and once it's done, Sam and I will show you some of the activities of using the deployed cloud, once it's actually operational in the 50 or 60 containers that constitute Kolla.

I'd like to talk a bit about the community. One of the cool things about Kolla is we have the team diversity tag. I don't know how many people in this room are familiar with the governance repo, but essentially, when OpenStack went through the big tent, they added this thing called tags. A tag can be assigned to a project so operators can determine whether the project is suitable for them. One of these tags is the team diverse-affiliation tag, or something like that, I don't quite recall. What this tag represents is that a lot of different companies and people are contributing to the code, so if one individual company were to drop out of contributing, the project would still survive and be pretty healthy.

Kolla has really nice diverse affiliation. You can see that from the diagrams of our reviews and our commits; you see a lot of the big names in computing there, and the pie chart's pretty even. I'd like the pie chart to be even more even: I'd like 10% from 10 contributors, versus the 21% from Red Hat and 20% from Cisco we have now; I'd like it to be about 10% from each company. But still, we are one of the most diverse projects inside of OpenStack (probably the most diverse is Neutron), and we're a brand-new project, so it's very rare for a new project to have this diverse-affiliation tag. If I were an operator judging whether or not to use a project in my deployment, the first thing I would judge is diverse affiliation, because that means the project will last a long time. 
I'd also evaluate whether it met my technical needs, but assuming it does, I would look at that diverse affiliation.

So what is Kolla? Kolla is a deployment system. It deploys Docker containers; the containers hold OpenStack services and OpenStack infrastructure services like MariaDB, and it uses Ansible to deploy them. Our Ansible code base is about 8,500 lines, our Docker code base about 3,000 lines; in all, including the documentation, Kolla is about 15,000 lines of code. So it's very small, very tidy, but it's very expressive; we can do quite a bit of work with it. I'm not going to read off the whole feature set there. The key things I like about Kolla are that we have good, really positive N-way HA; it works really well and it's fantastically designed.

Also, we support Ceph. The Ceph implementation is pretty new, probably about two weeks old, but I think if you're deploying OpenStack, you really have to look at Ceph as an option, and if you're not, you're increasing your pain threshold quite a bit, or you have to have a high pain threshold to go without it. Ceph solves a lot of real problems with OpenStack: it solves the HA problems of your storage, it solves not having to store everything across a bunch of different nodes, it solves not having to back up from different machines; everything's centralized. So Ceph is really nice, and I think we're the only deployment tool that deploys Ceph. I don't know if that's accurate or not, but I'll go with it.

We have really good distro choice: we support Ubuntu, CentOS, RHEL, and Oracle Linux, and in fact Oracle has just released a product based upon Kolla. So I get asked this question a lot: does Kolla have any real production deployments? 
And the answer is no, because we're pretty much brand new in terms of our implementation, but I expect that through Oracle's productization of Kolla there will be quite a few new deployments. I'd like to see Kolla targeted at the hundred-node deployment environment, maybe three racks, something like that, with networking. I think that's a pretty common size for deployments, maybe at the large end, so that's where I'd like to see our target hit.

In terms of other features: we're very anti-dependency, as all good software developers should be, in my opinion; you should have good dependency management in your software. We depend only on docker-py and the Docker engine, or whatever the distro packages as Docker, on the deploy target nodes, so we don't have to load a bunch of software on there to get things to work. This works really well for distros like Atomic, distros that have a read-only /usr filesystem, because you don't have to get them to distribute a bunch of dependencies; you just get them to distribute docker-py and the Docker engine, and everybody's doing that already. If you're delivering a container operating system, which is what these types of operating systems with the read-only /usr filesystem are, you're going to have Docker and docker-py on the system.

I think everybody's had a chance to read through the slide, so I'll move on. I just want to point out, in closing here, that we really deploy a lot of the major services of OpenStack today. They're functional today; they work well today. We deliver OpenStack as it was probably a year ago. 
Maybe 18 months ago. Completeness is where we're going in the future: we're going to deliver the big tent entirely. We've got about 15 services to package, and it takes us about 15 to 20 hours to do a service.

Okay, I do want to interrupt you. Oh yeah, go ahead, please. I just wanted to point out, you know, live demos and what they are: I did actually have the wrong tag before, so I went back, changed that tag, and continued the deploy. The time's not going to be accurate because of that; it stopped after the Ceph deploy, so I changed the tag and started it up again, and we can continue from there. Okay, good. So we're solid now? Yeah. Okay, I'll move on with the presentation. Live demos for the win.

Okay, so Kolla principles. I just talked a whole bunch about community, and our project is designed by the community. It's not like some rocket scientist somewhere said, okay, we're going to go design Kolla. We've gone through every kind of permutation you could think of with Docker; we've used Kubernetes; we've tried a bunch of different ideas, and we've really settled on a very nice, simple implementation. It's all been designed by people who have a common interest in working on container technology and on deployment, because in my mind deployment is the biggest problem OpenStack faces today, and everybody on the core team, I think, shares that philosophy, so we have a very common community there.

It's not just the nine guys on our core team who write the code, though; we also have about 30 other people who contribute code over time, maybe five or six commits a release. That's not a lot of commits, but it's a contribution, and I consider it very highly that people would take time out of their day to contribute to Kolla. So Kolla is designed and implemented by the community. It's one of our key principles; if we didn't have community, what would be the point? 
We might as well just be a proprietary software product; we might as well jam our stuff in a private repository and not even use GitHub, just have our little private repos. We don't do that. We're very open, very community-oriented; if things don't happen in the open, I get very upset. If the PTL of Kolla were to change, that might change, but for now we're very oriented towards community.

We're also designed for scale. Let me explain what I mean by that: I don't mean scale of the deployment, I mean scale of the project. We're designed to scale the development effort very quickly, because we expect that over time OpenStack will grow much faster than it's growing today. So we've designed our project in ways that may not be optimal technically, but are optimal in terms of adding new contributors: things are very simple, very straightforward, very easy to understand. I'll get more into that in a later slide, but we're designed for scale in terms of our developer community, in terms of people joining the project and learning about it. We haven't totally delivered in that area in places like documentation, but we had a great session at summit about that and we're going to improve our documentation. Right now I hear complaints that we have too much documentation; I have never heard that before.

Okay, so we're designed for choice. I talked about our distro choices; we have good distro choices. We also don't force people into packaging-only or source-only; you choose what you want. Now, sometimes you can't have everything. For example, on RDO there is no Murano packaging, so if you want Murano, you have to deploy it from source, or you deploy everything else from RDO and deploy Murano from source separately and build it separately. 
So we can support that model. We can support mixing Ubuntu versus CentOS in the same environment, with RDO and source; all of that would work. I wouldn't do it personally; I think you should stick with one choice, make the choice for a reason, a good choice, and stick with it. But if you wanted to mix things up, you could. So we're designed for choice.

Now, our project is executed exception-free. I'm not talking about Python exceptions or anything like that. What I'm talking about is that some people in life make choices and then make exceptions; they say, well, I'm only going to do this this one time. Let me give you an example: early on, our community was at a point where we were developing our software and didn't have enough core reviewers to review everything. I said no, we're not going to drop down to one reviewer, because that would be an exception. Instead we said, okay, every patch has to go through two reviews; all code has to be community-driven. That's what I mean by exception-free: we don't cut corners on those things, because if you make one exception, it's easy to make another exception, and then another, and that can harm the health of the project, the health of an organization. I think exceptions are the way to the dark side, personally. This is something I've really driven into the project with my involvement. So those are our principles; we have more, and Sam could probably tell you all day long about DRY, but I won't get into that.

I'll talk about our technology a bit. I'm going to tell you about the two things you have to learn to develop for Kolla. This is the first thing: a Jinja2 template. Basically, the only thing we use from Jinja2 is the conditionals. 
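The kind of template being shown might look roughly like this (image names and package names are illustrative, not Kolla's actual Heat template):

```dockerfile
# Dockerfile.j2 -- illustrative sketch; Jinja2 renders this before Docker sees it
FROM {{ namespace }}/{{ base_distro }}-{{ install_type }}-base

{% if base_distro in ['centos', 'rhel', 'oraclelinux'] %}
RUN yum -y install openstack-heat-common && yum clean all
{% elif base_distro in ['ubuntu'] %}
RUN apt-get update \
    && apt-get -y install --no-install-recommends heat-common \
    && apt-get clean
{% endif %}
```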
So this is a Dockerfile. The folks at Docker didn't add conditionals to Dockerfiles because it would complicate Docker; it would make Docker much more difficult to implement. But if you implement them outside of Docker, it's pretty easy, using Jinja2, and that's how we implement our conditionals. This is how we get all of our distros, such as Ubuntu and RHEL, into the same file, because we want to be able to look at something like Heat, which is what this example is, all in one place. If we have a heat-base file, we want to look at Heat in one location, not across five different files. This is an evolution: before, we had a bunch of symlinks and a bunch of separate Dockerfiles for each distribution; we had like a thousand symlinks or something. It was crazy; it didn't make any sense. So this is not necessarily a best practice, but it's something we do, and I think it's really beneficial if you use Docker and want to support your software across different distros, Ubuntu and RHEL for example. This is what you have to do, a solution you have to choose, and this is what we use. It's one of the two technologies you have to learn to develop for Kolla.

The other technology is our Ansible orchestration methodology, and here it is. You see there are three simple steps; I'll talk through them briefly. We take our custom configuration and the default configuration and merge them together. We start the container the first time; let's say we're starting a service like Glance. We start the Glance container the first time by bootstrapping it. 
We initialize the database and the database users; once the container is bootstrapped, we wait for it to exit, and then we start Glance. Now, bootstrapping doesn't happen all the time; it happens once, at the first boot of the deployment, so we expect it to be a very rare occurrence. Why do we have this method, this process? Because it's simple, it's straightforward, it's repeatable. It's a pattern, and we want to use this pattern over and over and over. Now, there are some things that don't use this pattern; Ceph probably doesn't, I'm not quite sure, so there may be some differences, not exceptions, because we can't use this pattern on Ceph, for example, and we do something a little different there. But we still try to follow the same model throughout the code base. So if you learn this, you can add a big tent service.

Typically people have four-hour blocks in a day, maybe two in a day if they're lucky, to work. Our goal is for people to be able to learn to contribute to Kolla in three four-hour blocks, so 12 hours of work, knowing nothing about Docker and nothing about Ansible, and I believe we've met that. So if you want to contribute, Kolla is a great place to contribute; it's easy, and we've got a ton of work to do; we've got so much work it's not even funny. But today Kolla is deployable and usable in the field. I'm really excited to see what happens from Oracle's sales, what they do in the field, and what they produce bug-wise for us to fix. I don't think there are a lot of bugs, because we have a very small code base, and because our code base is small, there are fewer bugs. There probably are some bugs. 
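The merge, bootstrap, start pattern described above might be sketched as Ansible tasks along these lines (module usage, image names, and paths are illustrative, not Kolla's real playbooks):

```yaml
# Illustrative sketch of the three-step pattern, not Kolla's actual tasks
- name: Merge default and custom configuration for glance
  template:
    src: glance-api.conf.j2        # defaults with operator overrides merged in
    dest: /etc/kolla/glance-api/glance-api.conf

- name: Bootstrap glance (first boot only; creates the DB and DB users, then exits)
  command: docker run --rm --env BOOTSTRAP=1 kolla/centos-binary-glance-api
  run_once: true                   # bootstrapping happens exactly once per deployment

- name: Start the glance-api container
  command: docker run -d --name glance_api kolla/centos-binary-glance-api
```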
You know, I don't make bug-free software.

Okay, validation. We use standard gating; we gate our builds, so every container is gated on its build. We have about 95 containers or something like that, and we gate those containers on the build, so a container has to build before a commit can go into the repo. Now, we have an override file where we say, okay, these containers we know don't build for this distro or whatever, and we can turn those off, so we don't always gate consistently, but we have control over when we choose not to gate.

We deploy a subset of containers: we have those 95 containers, but we don't have Ansible code for all of them; we have Ansible code for maybe 80% of them, so we deploy with Ansible for that 80% of our services. And we only build the containers we need: we have some technology called profiles, which lets us build a profile of things, like a compute kit, for example, if you wanted that instead of a full big tent deployment. So that's kind of our validation.

What we want to do next, something Paul Bourke is working on, is Tempest validation in the gate, so we actually validate the implementation running inside the gate. After that we want to go from one node to two nodes, and then advance to maybe a five-node deployment with three-node HA and two compute nodes. We want to use OpenStack infrastructure and OpenStack gating to make sure our software comes out working well, because once a change goes in, it's really hard to refactor it in a way that suits everybody: it went in the first time because it was good, and getting it to go in a second time, because you need a bug fix, is harder. 
So we like our code to pass that validation. For those who don't know about gating: gating is kind of a fundamental shift in how software should be developed, and OpenStack is really leading the charge there; we're really engaged with the OpenStack infrastructure team to make our gating work well.

Okay, Sam, demo. Okay. Is the deploy done? It's not quite done; no, I had fat-fingered that tag. Oh, I can switch over on your laptop. Yeah, let's do that. Right now it's all about Cinder. We have a small script to, you know, upload the CirrOS image and create a small network. Since all those services are deployed, I'm going to go ahead and run that, because I don't want to run out of time, and show you that it's a functional OpenStack environment. You'll see it in a second here, but the deploy is on Cinder right now, so it's about wrapped up. At this point Glance, Nova, Keystone, Neutron, Ceph, MariaDB, RabbitMQ, those have all been set up and clustered, and I'll run through what that looks like while we're waiting for Heat to install, so we can show you a Heat demo.

As I said, it's on Cinder right about now. I'm going to switch over to the view I had of the three nodes. This is all three nodes; before, I showed you there were no Docker containers running, and now you'll see there are quite a few of them. The formatting is less than ideal. Try s... capital S... that's even worse. Pipe it through wc -l; you just want to count the containers. Let's do that. It's the wireless. Always the wireless; it's been dropping out on me. I'll take care of that, Sam; keep going. Live demos, you know, they're what everyone should be doing. So 36 containers are running right now, and each of those is a microservice. 
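The container-counting trick here is plain line counting: `docker ps -q` prints one container ID per line, so piping it through `wc -l` gives the count. A sketch with printf standing in for Docker, which may not be installed where you read this:

```shell
# printf stands in for `docker ps -q`, which emits one container ID per line;
# counting lines therefore counts containers
printf 'id1\nid2\nid3\n' | wc -l
```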
They each have one process running inside. While it's deploying, I'm going to go ahead and, like I said, run the script we have. What did you name the script? It's in the demo directory, okay, on your laptop. Oh, there it is. Okay, so you see the script here; it's got some comments in it, but it creates a Glance image, creates the Neutron networks, Nova... it's a fairly basic script that just sets things up the way we want. So we're just going to run that now. Need to source credentials; how many times has that happened to all of you? Every time. Every time. So it's obviously started running; these are the API call returns, which should be some indication that it's working.

We're going to jump back over to the deployment now and see if that's wrapped up. Okay, so like I said, the time is a little fudged. In this environment it takes about 18 minutes; it stopped about halfway through because of the tag issue, so I corrected that, and it took about 11 minutes to run, but in total you should expect about 18 minutes. It depends on your hardware, of course; on my hardware I can deploy all this in about eight minutes. It depends on the registry as well, because you're going to be pulling down these images, so a local network is ideal for speed; if you're pulling them over the Docker Hub, it's going to depend on your bandwidth and other factors. But at this point all of the services should be up, and I'm going to go ahead and test a few of them.

Over to you, Sam. Okay. So we already showed that we're hitting the APIs and doing things; you see the service list, and these agents are up. Do you have Heat... I'm not sure where on your laptop... you have my directory, the demo directory. So we're going to run this launch script. 
This is launching instances with Heat. So, source the openrc again. What's really cool about this is we're using Heat to launch VMs, and Heat uses Nova and Neutron and Glance and all of the other services that are part of OpenStack. So if this works and doesn't implode then, you know, you have pretty good confidence that OpenStack works. Now, I think it would be interesting to see, if you did this a million times, how many times it failed. For me, it's not really a failed demo unless I did something wrong as part of my work. Now we see it here: Nova started up a bunch of VMs. Sam, why don't you do the `nova list` and pipe it through `wc -l` so we can see how many instances are active... there you go. So that's just going to search for the ones that have kicked into ACTIVE state. I don't believe any have... yeah, we've got seven in ACTIVE state already. I believe this launches 10... 32, I think. 32. Okay, well, it launches quite a few instances, and we'll be checking in on those. I believe we have two minutes left here. Let's do Horizon real quick, and if folks have questions, please queue up at the microphone and we'll take a few questions while we still have time. If there's a question... while Sam shows us Horizon, I'll answer questions, and if the core team could come up real quick, I just want to introduce the core team briefly. And please go ahead and ask your question. Hey. So my question is, you said that you support deploying from source. So how do you handle dependencies? I mean, all those OpenStack services depend on various Python libraries, which I would assume also have to be there in the container. So how do you handle that? How do I handle that?
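The instance count Sam pulled up during the launch demo is the same kind of pipeline as the container count earlier; a sketch (`grep -c` filters the rows and counts them in one step):

```shell
# Count Heat-launched instances that have reached ACTIVE state:
# `nova list` prints a table with one row per instance, and `grep -c`
# counts the rows matching ACTIVE.
count_active() {
    nova list | grep -c ACTIVE
}
```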
The way it works is through the requirements file. So the requirements file specifies the dependencies of the service, and we just install those requirements. So we rely on the upstream OpenStack projects to maintain their list of dependencies and the versions they need; that's how we handle it from source. For binary packaging, like RDO, the distros handle the versioning. Okay, a second question. Do you integrate with existing Kubernetes environments? Say I already have a Kubernetes cluster I would like to use to schedule your containers. Yeah, no, we don't integrate with Kubernetes. We tried Kubernetes; it doesn't work for us, because there's no net host functionality, there's no PID host functionality, which is what we need to implement deployment of OpenStack services. The Kubernetes community has said they don't want those features in Kubernetes because they're a security risk. We deploy on bare metal, so it's not a security risk for us, but if you're deploying on Kubernetes in a shared environment, it could be a security risk. Okay, thanks. It's spinning. Well, I think that's right. Still loading. We've got Martin, Ryan, Paul, and Michael up here. They're the other core members, and we have some more core folks, but they're just not here at the moment. So yeah, nine in total, and we're probably going to be adding more shortly. All right, you have a question? Yes. This is an interesting project compared with OpenStack Ansible. My question is, OpenStack Ansible is supported by Rackspace, which means it has real-world deployments with really high load. So I want to compare: has anyone been using this with really high throughput and really high load, like 10 or 100 servers? Yeah, as I mentioned, we haven't deployed Kolla at 100 nodes, so we don't know. I don't know if OSAD has either.
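The net-host and PID-host point in that Kubernetes answer comes down to two Docker flags; a minimal sketch (the image and container names here are illustrative, not actual Kolla image names):

```shell
# Several Kolla services need to share the host's network and PID
# namespaces, which Docker exposes directly via --net=host and
# --pid=host; Kubernetes at the time offered no equivalent.
# Image and container names below are illustrative.
start_host_networked() {
    docker run -d --net=host --pid=host \
        --name neutron_agents kolla/centos-binary-neutron-agents
}
```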
I don't have that information. I would expect Kolla would deploy to 100 nodes really easily. I don't think there are going to be any problems, and if there are, they'll be very easy to solve. But we haven't done it, so I can't say for sure that it would work, but I would think it would; I don't see any reason why it wouldn't. Where we run into trouble with Kolla on scalability is the database and RabbitMQ, and whether those scale. And then you're talking thousands of nodes, not hundreds of nodes, where the scale limits are. Okay, one more good question. Does your script support scaling out the nodes, like if you want one more compute node? Yeah, that's already included. All you have to do is add it in that Ansible inventory file: just add another one and run the playbooks again, and it adds it, no problem. You can do it with all the services, including the database and RabbitMQ. Oh, okay. Have you tried that? Thanks. I think we're out of time now. Yeah, we're out of time, folks. Thanks for coming, I really appreciate your time. I hope you try out Kolla. If you need help, come to the #kolla channel on Freenode; we'll help you out, we'll get you started. I will help you evaluate Kolla. I really want deployments of Kolla, so if there's anything we can do to make that happen, let us know and we'll be willing to help. Thank you, I appreciate you.
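Scaling out the way that last answer describes is just an inventory edit followed by re-running the playbooks; a minimal multinode inventory might look like this (group and host names are illustrative, not Kolla's actual inventory layout):

```ini
# Ansible inventory sketch: to add capacity, append a host to the
# relevant group and run the playbooks again.
[control]
control01
control02
control03

[compute]
compute01
compute02
compute03   # new node: append here, then re-run the playbooks
```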