Hello, everyone. This is the last session of the last day of the Summit, and I think everyone will agree it has been quite a long week. We're all running on fumes, so I appreciate you all the more for coming here and joining me for an update on our glorious project, Kolla. My name is Michał Jastrzębski, and I'm the PTL of Kolla. You can find me on IRC, which is probably the best way to contact me: inc0 on #openstack-kolla. So let's get to it.

Most of you probably know what Kolla is, but let me restate it. Kolla is a deployment tool for OpenStack using Docker containers. Our full mission statement goes like this: Kolla's mission is to provide production-ready containers and deployment tools for operating OpenStack clouds. That means we want to provide images for other people to consume. We have three projects, or deliverables, under the Kolla umbrella. Kolla itself is just the images: image definitions, Dockerfiles, and the build infrastructure for them. That's what Kolla is: the images and how to build them. Then we have two deployment mechanisms. We deploy Kolla images with Ansible; that project has the surprising name Kolla-Ansible. We also have Kolla-Kubernetes, which deploys on top of Kubernetes using Helm.

A little bit of history: the project was initially conceived around the Paris Summit, and back then it was about two people. I wasn't there; I joined the project around the Vancouver Summit, and we've grown since then. According to the latest user survey, which came out in April, we're at 15% of production or test-phase deployments, spanning 4% in production and 9% in test phase. And interest keeps growing: we've seen a pretty steady growth of about 50% more people expressing interest in our project, release over release.
So I think it's fair to say we're doing something right. Almost one third of the OpenStack community expressed interest in the Kolla project, according to the latest user survey.

Let me start with the community update. To me, projects are about community, not about code, so I think this should come first. In Kolla, we value diversity in our community. So far we don't have any formal rules for it, because we've never had to: no single company has more than 20% of our reviews. That means security; it means we're not driven by the hidden agendas of some business people, we're driven by the community. And if one company decides, hey, we don't want to do OpenStack any more, for whatever reason, Kolla will survive. We're also very globally distributed. I don't have the exact numbers, but by now I think most of our core reviewers are from either Europe or Asia, and our top reviewers are certainly spread around the globe.

During the Pike release, we've had more than 30 different companies contributing to Kolla. That goes to diversity. In Ocata, we had almost 180 reviewers in total. Granted, most of them did one or two reviews, but multiply that across 180 people, and quite a few did far more, and it adds up to a very nice number. It also shows how much we've grown: over 30% of reviews are done by "others" in terms of company affiliation, which means that on top of no single company having more than 20% of reviews, the number of people who are unaffiliated, or who just happened to come by and do reviews, is quite high. We also had almost a 50% increase in reviews in Ocata over Newton, despite Ocata being a much shorter cycle, and about 1,500 commits in Ocata, 200 more than Newton. That shows how much we're growing as a project.
So this slide is apparently a thing in project updates: what we'll be focusing on in the Pike release. I marked this one as a minor focus because there's really not much left to be done on our side; most of the project is already proven to be deployable. The images are deployed on up to a couple dozen thousand servers in the biggest deployment I know of, and Kolla-Ansible is proven to work well at around 300 nodes at least; we haven't tried more. That space covers more than 90% of OpenStack users, which I think is quite a good number as well.

Resiliency depends on what you mean by resiliency. We deploy OpenStack in a way that is as resilient as the OpenStack we deploy, and that's up to the user, and up to the projects, really. But we do care about manageability, because that is well within the scope of a deployment tool, in my opinion at least. We want to provide day-two toolsets for operators; we go beyond deploy. It's not fire-and-forget yet. We do provide upgrades, we provide reconfigure, and we have some tools to help fix certain issues that may happen in your cloud. Modularity, again, is not really a deployment concern, I think.

Interoperability, though, is actually very important in a deployment tool, because one of the features of Kolla is that we support multiple distributions inside the containers, so you can pick the distribution you're most familiar with. And even beyond that, because we're running in containers, there is some distance from your host, so you can run on pretty much anything: I've seen Kolla run on Arch Linux, on Gentoo, on CoreOS. Inside the containers, we can currently build with Oracle Linux, CentOS, Ubuntu, lately Debian, on more than one architecture, not just amd64 but Power as well, and RHEL too; I'm losing count myself.
So yes, I think interoperability is, and will be, a major focus, and we're well adapted for it anyway. Security always is and always will be a major focus too. We take pride in deploying the most secure cloud we possibly can, to the best of our knowledge. For example, we take measures to remove common attack vectors: every service we deploy binds only to the specific IP it is given, rather than opening up on all interfaces. That's just one of the ways we try to limit the number of security issues you may run into. As for user experience, with Kolla-Ansible we are only as good as Ansible lets us be, but in Kolla-Kubernetes we can really work on user experience, and we do; that's a different topic. That's also part of our Queens plans for Kolla: user experience will become my major focus. Queens is far away, but in Kolla-Kubernetes, user experience is being worked on as we speak. We will also focus heavily on improving our documentation, which is user experience too, and it's probably one of the places where we most need help.

Now, project-wide updates. Ocata was a very interesting release for us. Before Ocata, in Newton, we had just one project; it was called Kolla, and within the Kolla repository we had an ansible directory, just to have a way to deploy the images we built. However, once we arrived at a place where we felt we were stable enough, and had a strong enough community, to move to the next phase of Kolla's evolution, we did: we split the Ansible part of Kolla out into Kolla-Ansible. This way we want to encourage everyone, every other community, to use our images as they are, without Ansible.
If you like Puppet, please deploy them with that instead. And in fact, our images are being used across multiple projects: TripleO, for example, is using Kolla images. That just shows you don't need to be heavily involved with Kolla to be able to use it. And I encourage everyone: if you like Puppet, or Chef, or whatever, we still have space in OpenStack for more projects; you can create your own community around Kolla images.

So, what happened to the images during Ocata? First of all, we have 15 new services. One of Kolla's standing goals is to be able to deploy the Big Tent, and we're still not there; the Big Tent is quite big and it keeps getting bigger, and we also carry lots of non-Big-Tent projects. So during the three months of Ocata, we increased the number of available images by 15.

Next, "use default user." I was debating whether to call it a feature, but it was big enough to call out. Deploying with Docker is still relatively new; we're a few years into it, but when you're serious about it, you're still discovering the right way to deploy things with containers. One of those discoveries was a UID issue we found. What happens is this: when you build MariaDB today, installing MariaDB inside the container creates a mariadb user, which may get UID 1000. You build Newton, the UID is 1000, and MariaDB creates data files owned by UID 1000. Then you build tomorrow, because you want to upgrade to Ocata, and the mariadb user gets UID 1001, because the assignment is semi-random. So when you do the upgrade, MariaDB loses access to its own data, which is, you know, not ideal. That's just one of the issues we fixed within Kolla.
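The general fix for this class of problem is to pin the UID/GID at image-build time instead of letting the package install pick the next free one. A minimal sketch, where the user name, the specific UID, and the package manager are illustrative assumptions, not Kolla's actual values:

```dockerfile
# Illustrative only: create the account with a fixed UID/GID *before*
# installing the package, so every rebuild produces the same numeric
# owner for the files on disk.
RUN groupadd --gid 42434 mysql \
 && useradd --uid 42434 --gid mysql --home-dir /var/lib/mysql mysql

# The package install now reuses the existing account instead of
# allocating a semi-random UID, so data volumes written by an older
# image build stay readable after an upgrade.
RUN yum -y install mariadb-server
```

With the UID fixed, a volume written by yesterday's image is owned by the same numeric ID that tomorrow's image expects.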
I just wanted to show that deploying with containers has its quirks, and we're actively working to fix them; it's more than just installing packages.

So what are our goals for Pike? Well, we want more images. As I said, the Big Tent is big, and we're not done yet. We need more packages, more applications, and I'm pretty sure people will keep adding images that may not even be Big Tent; we also had etcd added, for example, I think back in Newton. Non-OpenStack services matter too, especially with the new goals of the OpenStack community, which we heard a good bit about in the keynote: OpenStack is meant to be reused not only as a whole compute kit, but in pieces, and Kolla may fit that model very well.

The other big one, I think, is that at last we want to create a source of images. Kolla is meant for building images, but not everyone has to build them. We have image repositories, and we want to create a source of images for other people to consume: images that have passed Kolla's CI. If you have your own lab, a staging environment, a CI environment, that's perfect, build them all yourself. But not everyone has such a sophisticated staging/CI system, and providing a set of images that passed the Kolla CI system is one of the goals we want to achieve, for both master and the stable branches. The other reason is that this will let users live in the continuous-delivery style of OpenStack operations: doing constant upgrades, like an upgrade every day.
So when a security patch is released in one of the underlying services, you get it right away.

Kolla-Ansible in Ocata: reconfiguration optimization. If you ran reconfigure back in Newton, it was slow. That's partly an Ansible thing, and partly multiple things on our side. This was one of the pieces of feedback we got in the feedback session in Barcelona, and we went to quite some lengths, reorganizing a lot of code, to make it better; I think it is better today. I want to emphasize this to send the message that yes, we listen to feedback, and yes, we fix things. It is incredibly important for us to hear from users what the issues with Kolla are, so we can fix them.

Next, changing Heka to Fluentd. This concerns the logging infrastructure deployed alongside Kolla: when you deploy Kolla with centralized logging, you get Elasticsearch and Kibana right away, and underneath we used what used to be Heka. Heka was a Mozilla open source project that was deprecated, which meant we needed to switch to something else. After quite a discussion, we settled on Fluentd, and now there is no Heka left. We also added 11 new roles; a role in Ansible corresponds to a separate service to deploy, which means we can now deploy 11 more services than before.

Kolla-Ansible Pike goals: the first one is better gates. That's a standing goal; we always improve our gates, but we can always have more and better ones. One thing we definitely need is more deployment scenarios: deploying more services in our gates, covering different use cases and different reference implementations, to make sure we won't break anything. Another thing is a full upgrade gate.
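To give a feel for the Heka-to-Fluentd switch, a centralized-logging pipeline in Fluentd looks roughly like the fragment below. The log paths, tags, and Elasticsearch host are illustrative assumptions, not Kolla's shipped configuration:

```
# Illustrative Fluentd pipeline: tail a service log, forward to Elasticsearch.
<source>
  @type tail
  path /var/log/kolla/nova/*.log       # path is an assumption
  pos_file /var/run/fluentd/nova.pos   # remembers read position across restarts
  tag infra.nova
  <parse>
    @type none                         # ship raw lines; real configs parse fields
  </parse>
</source>

<match infra.**>
  @type elasticsearch
  host elasticsearch                   # hostname is an assumption
  port 9200
  logstash_format true                 # daily indices, the layout Kibana expects
</match>
```

The same source/match model is what lets one Fluentd instance aggregate logs from every container on a host and feed the Elasticsearch/Kibana stack mentioned above.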
By that I mean that on every commit to Kolla, which is more than one a day, we will deploy the latest stable branch and upgrade it to master. That will not only ensure that Kolla doesn't break upgrades, but will also let us give feedback to the project teams, say Neutron, if something breaks upgrades on multinode; we'll be able to see it and help them fix it before the actual release happens, so users won't have to suffer through it. That also resonates with multinode gates: currently we deploy only single-node in the Kolla-Ansible gates, and we want to deploy multinode, HA OpenStack. Again, lots of issues only appear when you deploy clustered, and we want to catch those.

Another thing is documentation improvements. I cannot stress that enough: we need better documentation; it's probably the weakest piece of our project. And one of the things I would really ask people here to give us is use cases. We want to know what you're deploying; we want to make sure that whatever you're deploying works well; and we want to document the most common use cases so other people have an easier time going through the pains. Best practices is another thing we discussed. It's like use cases, but not quite: "okay, this is happening to my containers, how do I fix it?", or "don't do that", or "use these versions of packages, or of Docker." All those operator tips and tricks are extremely useful. And an easier quick start: it's quite easy today, but it can be easier, and we're working on it; we already have some materials that are even easier than the quick start. Also more roles; again, Big Tent, we want to deploy more, and it's well within our scope and goals.

And then Kolla-Kubernetes, the new kid on the block. Everyone is talking about Kubernetes now, so we don't want to stay behind, and we've made quite a lot of changes since the Barcelona Summit.
You may have seen our presentation at the Barcelona Summit, where we did a live demo of Kolla-Kubernetes and deployed a full OpenStack on stage. We've removed almost all of that code since. We moved to Helm. Helm is a package and templating management tool for Kubernetes; it lives under the Kubernetes namespace as a separate project with a very cool community. We worked with them and implemented a couple of features in Helm to enable deploying the complex beast that OpenStack is, and currently we use Helm to provide the resources Kubernetes needs to deploy a full OpenStack.

So we went with the native Kubernetes templating mechanism, which is Helm, and with what we call a microservice architecture. We had lots of discussions about how to do Helm properly and how to arrive at the sort of flexibility our users require to deploy different OpenStacks; for example, we want to enable brownfield migration to Kolla-Kubernetes as well. So we created a microservice architecture, and a friend described it to me in a way I really like: every service in OpenStack, nova-compute, nova-api, the neutron services, is a puzzle piece. You throw them all on the table, pick the ones you want, and assemble the OpenStack you want. It allows you as much flexibility as you want. However, we also have overarching charts, like the compute-kit chart; a chart is the equivalent of a package in Helm. If you want just a generic OpenStack, it's going to be one command. If you want to deploy every single thing manually, because you're migrating from a brownfield deployment one service at a time, that's also going to be possible. We want this middle ground between ease of use and flexibility, should you want it.
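To make the puzzle-piece idea concrete, usage might look roughly like this. The chart names and namespace below are hypothetical illustrations of the pattern, not the exact Kolla-Kubernetes package names:

```
# Umbrella chart for the generic case: one command deploys a whole compute kit.
helm install kolla/compute-kit --namespace kolla

# Or compose the same cloud piecewise from the micro-service charts,
# e.g. when migrating a brownfield cloud one service at a time.
helm install kolla/nova-api --namespace kolla
helm install kolla/nova-compute --namespace kolla
```

The umbrella chart is itself just a chart that depends on the micro-service charts, so both paths produce the same Kubernetes resources underneath.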
One thing we did is a kind of test-driven development. One of the things I'm extremely proud of in Kolla-Kubernetes, in what the Kolla-Kubernetes community created, is one of the most sophisticated gates I've ever seen in OpenStack. Every commit to Kolla-Kubernetes is tested on multi-node gates with a Ceph backend, the way it should be run in production, in our opinion. That means a full Kolla-Kubernetes OpenStack is deployed in OpenStack's infra a couple dozen times a day, in as production-ready an architecture as we could achieve with the limited resources infra offers. It's a really, really solid gating system that ensures everything we do keeps working and won't break. Yes, it was very painful at the beginning, and it took a lot of work to get to this point, but right now it's really something to be proud of.

We also have an ambitious roadmap for the first release, which we created at the Atlanta PTG, the Pike PTG. The list of features we want in 1.0 is quite long and quite complete, but to highlight a few: easy security upgrades, which also resonates with Kolla image publishing. We want to be able to run Kolla-Kubernetes in a way that is almost constantly upgradable. From a security patch being published, say to MariaDB, to it being deployed in your OpenStack, we want less than a day: we build images every day, you pull and upgrade every day, and security upgrades roll in without you even watching, extremely easily and extremely fast, which is critical. We also want to start with release upgrades. Most release upgrades are not that hard, at least not with containers, but there are a few; probably the hardest was Newton to Ocata, with the Placement API being introduced.
And we want to be able to handle that kind of upgrade. Since this is going to be our first release, we don't strictly need release upgrades to get onto 1.0; we need upgrades from 1.0 to 2.0. That's why we prototype now: we want to make sure nothing will stand in the way, and we're pretty close to having actual upgrades, to be honest. And brownfield-friendly, as much as possible. Brownfield will always be painful; that's just how life goes. But there are lots of OpenStacks already running, and once you run OpenStack, we don't want you to be left alone; as much as possible, we want you to be able to migrate from an existing OpenStack to Kolla-Kubernetes. We'll actually have a very sophisticated use case for that, because one of our lead contributors is going to do exactly that, as soon as we're done, on a couple of his OpenStack clusters, each around three hundred nodes. That will be quite an interesting use case.

So, closing notes: yes, please help us. It's an ambitious project, and we still need help. There are many ways to contribute; you don't need to be writing code. We encourage you to write code and to review, but that's not the only way to contribute. As I said, we love feedback. Don't be shy: come visit our IRC channel and say, "hey guys, this is not working," or "this is breaking," or "I don't know how to do this," or "we need to do it this way, and it's not documented." We need that kind of feedback; we need the hard truths, the ugly truths, so we can make things better in the end. Of course, patches and reviews are the regular things you'd normally think of as contribution. As I said, we take pride in our openness and in our diversity, and I would love to see more faces and more names on our reviewer list.
Bugs are also a great way to contribute; if a project has no bugs in the queue, that means nobody uses it. So yes, please. And you can spread the word. If you actually like our project, I encourage you to try it, and to tell your friends; maybe they have the same problems you do. Not everyone is here in this room; not everyone has the stamina to wait for 5 p.m. on the last day of the Summit. I would love to see more people, and, well, you can never have enough friends. So, again, thank you for attending at this late hour, and I'll be happy to answer any questions you might have.

Audience: If we're taking a first look at Kolla, would you recommend one method over the other: the Ansible method or the Kubernetes method?

Ansible, hands down. Ansible is stable; Kubernetes is still in development. At some point it's going to be stable, but it's not there yet. Ansible is stable, it's been running in production, it's been battle-tested; it's the one I would encourage. In fact, I ran a workshop two days ago, on Tuesday, called the Kolla novice install. The video is published, the materials are out there on the internet, and we have a quick start guide. Feel free to try it at home, and if you run into any problems, just give us a holler on the IRC channel.

Audience: Awesome, thanks. Great talk, Michał. So, Kubernetes is famous for not making you build the Kubernetes containers, right? You just pull and start them: the scheduler, the controller, etcd, CNI, we pull them and start them. I've brought this up a couple of times in meetings, and we talked about it in a session yesterday: what is the roadmap in Pike to make Kolla just pullable and startable, with a mechanism to automatically push master, continuously built master containers? Right now, we can only pull your golden stable. But if you're an upstream developer, you don't care much about stable, right?
You want to be on master, right? And otherwise you have to spend four hours on your laptop building 100 containers, or at least the changed ones.

So, what we're doing today is this: every time a commit passes the Kolla gates and is merged, we build the whole stack of images. They are tested, and we save them in a certain place; it's a hacky way, but that's what we're doing: we save the bunch of images inside OpenStack Infra. What we want to do later, for master and at least Ocata stable, later Pike stable and all the stable branches we'll have by then, is to push all the images we built that day to Docker Hub, once a day; this is still in progress. That means, in general, you'd have images that are at most 24 hours old. And as we increase our gating scenarios, the images will become more and more resilient, because, again, they need to pass the gates to be accepted and saved. The more testing each image gets, the more stable the whole stack becomes, and hopefully, at some point, we will be trusted enough that you can just upgrade your OpenStacks from, you know, a proverbial cron job. Thank you very much for coming, and I hope that
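Once that daily publishing pipeline lands, consuming Kolla could look roughly like this. The repository and tag names below are illustrative of the scheme described, not a promise of the exact naming:

```
# Pull a prebuilt, gate-tested image instead of spending hours building locally.
# (Repository and tag names are assumptions.)
docker pull kolla/centos-binary-nova-api:master   # rebuilt and pushed daily
docker pull kolla/centos-binary-nova-api:4.0.0    # pinned stable tag
```

A daily cron that pulls the fresh tags and triggers an upgrade is exactly the "proverbial cron" flow described above: a security fix merged upstream reaches your cloud in at most a day.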