So, good morning everyone. Good morning, Praha. It's great to be here. How's everyone doing? Excited? Yeah, great. I'm really excited to be here, and this morning I'm really excited because we're going to break down barriers to enterprise adoption of OpenStack. Who's excited to get into that business? Woo! Let's have some fun. Great. So I'm going to share our journey, our operational journey, to redefine OpenStack operations. We're getting into operational enlightenment here today, but I don't think that's the best bit. That's not the bit I'm excited about. Who loves live demos? Live demos? Come on, let's have some live demos. So we're going to have some fun. Everything here is live and we're going to be doing it. Let's have some fun together. My name is Lachlan Evenson from Lithium Technologies, and I am a cloud builder, a cloud deployer, and an OpenStack contributor. And this is Jakub Pavlík. He's also on the same team as me. We're both deployers, contributors, and you know what we love the most? Eating our own dog food. So everything I'm sharing here today is from experience. We've been in the trenches fighting the good fight, and everything you're going to see here today is the culmination of the last two years of knowledge that we've shared together and with the community. So I think it's a great outcome that we're here doing that. So, as with any journey, let's get started. Ragtag. The beginning of our journey I want to call ragtag. Our mission at this part of the journey was to deploy OpenStack in the US. That was our mission. But how we did that was in a ragtag way. So what does ragtag really mean? It means you tie things together; there may not be connections, there may not be relationships, but you build a picture out of chaos, essentially. So the first part of our journey is ragtag. Let's dig into what that really meant. So when you start your OpenStack journey, what are you really trying to do?
All you want to do is actually install and get OpenStack running, and you think, you know, that's it, mission accomplished, right? But I'm going to challenge you that it's a little bit deeper than that. So you go to the market, you look at how you install OpenStack, and you get a myriad of deployment tools: Ansible, Puppet, TripleO, Fuel, just to name a few. And really, how do you make a decision at this point, having never run OpenStack, about which one to choose? Essentially, I'd sum it up as: you just go with your gut. You take some tooling that you know, or that your team knows, and you start from there and implement that tooling, right? That's the best shot you can take. You click the button, you do the deployment, and this is what happens. You don't know that this happens, right? It's magic. That's the beauty of a deployment tool; it should take care of the magic for you. But I'm not here to illustrate what this diagram is. I'm just here to illustrate that it actually lays down a complex framework of infrastructure, right? Which is what the deployment tool needs to do. But this infrastructure is generic. It may not be suited to your needs. As Adam said earlier: what's your use case? So it's generic. It may not even be ready for production, but you're not really sure of that right now. Your goal was installing and getting OpenStack running, and you've met that goal, right? So let's dig into that a little bit. When we take a look at deployment tools, they're really good at deploying, but they're one-shot in this day and age. You click them once, you get OpenStack. You click them again, and you blow up what you just built. So that is kind of a dangerous state of affairs. And also, how do you actually manage what you just deployed? With everything that's out there, how do you actually run it on day two? It's an interesting predicament.
So you actually find, at this point in the ragtag journey, that what you thought was the end was only the beginning, and deploying OpenStack is actually the easy part. So if we drill down into day-two operations, what does that look like? You come in on day two. You've got OpenStack up. You've told the company you're done. Then you get an email: there's a security patch, can you please deploy it to your OpenStack infrastructure? Okay, great. How do I do that? So this is day-two operations: patching, backups, monitoring, documentation. All these kinds of things are not serviced by the deployment tool. And this is the shocking realization after day one. So if I just summarize the end of our ragtag journey, it's that our expectation was that the deployment tool managed ops, but the reality is that the deployment tool does not do ops. It's just really good at doing deployments. So then what happens? You've got a bunch of smart guys and girls in the room, and they all sit there and go: we can build something ourselves. So the ops team starts building the tooling to do the day-two operations. And that looks like a hodgepodge of scripts, manual hacks, and tribal knowledge. Basically you have a team of people who each know a little bit of the puzzle, and heaven forbid one of them should leave, because you lose a piece of the knowledge of your OpenStack infrastructure. So to close out: ragtag does not equal your ideal workflow, right? You need to level up. How do we bring operational structure to OpenStack? We want to deploy this and make it maintainable and scalable. So this is the next part of our journey. Let's look at ops. What do we need to do to get from ragtag to ops? Let's define that. We sat down and we said: what do we want? We want something that is repeatable. We want a single source of truth. These are all best practices in ops. We want some kind of place where we can put all the knowledge.
We want it to be versioned. We want a cloud that just gives us best practices, right? There are over 700 configuration options in Nova; we probably only need to touch four. Just give us the four we need to touch. We also want to be able to leave a personal imprint on the cloud, right? We want our fingerprint to say: this is my cloud, I'm proud of what I built. So we want the flexibility in the whole workflow to leave our fingerprint. So you're left again with: how do we deploy OpenStack now? You go back out. Is there anything that does this? Should we build our own again? Hang on, we've done this before. No. So we were pleasantly surprised, because we have this massive community here. Surely this is a common problem. How can we actually get to this kind of workflow? Can we take a look at the community? I was pleasantly surprised when I came across the OpenStack-Salt project. Jakub, can you tell us a little bit about this? OpenStack-Salt is a project which started development about a year or a year and a half ago, and it's officially under the Big Tent as of this year. And it covers exactly everything that you mentioned. It covers mostly operations and lifecycle management, upgrades. And it's not just a configuration management tool, but a workflow, which we are trying to explain here, and which you will see in the second part of this presentation. Fantastic. Thanks, Jakub. And that's what we're really excited about: it wasn't just deployment, it was workflow. This project was also trying to solve day-two operations in a single platform. So if we take a look at the outcome of our ragtag and operations journey, in our US data center, patching was a mixed set of scripts, upgrades were not possible with the deployment tool, and worst of all, it wasn't scalable and repeatable. So when we went on the second pass to build our EU data center, and we tried with the second model, this is what it looked like.
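The "just give us the four options we need to touch" idea can be sketched as a tiny pillar file: the operator pins only the handful of Nova options they actually care about, and the configuration-management formulas supply tested defaults for the other ~700. The key names below are illustrative assumptions, not the exact OpenStack-Salt schema.

```yaml
# Hypothetical cluster-model snippet (key names are illustrative):
# the operator states only the values they care about; the formula
# fills in sane defaults for every other Nova option.
nova:
  controller:
    version: kilo
    cpu_allocation_ratio: 8.0
    ram_allocation_ratio: 1.0
    workers: 4
```

Because the whole model lives in Git, every change to it is versioned, reviewable, and reproducible: the single source of truth described above.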
So we had it service-oriented, we had it automated, it was backed up. It was all audited and versioned through a Git workflow. So we had the visibility, and we'd actually achieved all those operational paradigms that we set out to, and that was part of the goal. But as engineers, it's never enough. We took a look at what we had laid down, and even though it was prod-ready, it was modular, it was there ready for us to go, we still had 23 VMs. So I still felt we were treating the thing as infrastructure, looking at it through operational eyes. Is it really infrastructure, or is it an application? That's what we were left questioning. How could we make this better? So we get to DevOps, right? Everybody knows DevOps; what it means varies. But we wanted to get to the place where we could make this maintainable and scalable at a larger scale. So what does DevOps look like for us, coming from ops? What do we want out of DevOps? We wanted to look at OpenStack as a set of applications, so let's start treating it as such. We know how to deploy apps. How do we deploy apps? Let's make them immutable, composable, reusable. Let's break the pieces down. Let's not see things as VMs; let's see things as applications and services. So this was a new paradigm for us. And how do we deploy apps in a microservice architecture? We can deploy them in containers. So we took a look at what we could do with containers, and we were pleasantly surprised that we could reuse OpenStack-Salt to deliver OpenStack in containers and actually meet the needs of this DevOps cycle. So for what I'm about to show, Jakub's going to give a really cool demo. I'm very excited. Hold on to your seats, because you might be blown away. Jakub, take it away. Yes. So what I'm going to demonstrate right here now is how we are deploying OpenStack in containers.
And what we did: we sat together, I think two weeks ago, with Lachy and my team, and we realized how we could reuse our existing solution to get it into containers, rather than developing yet another set of tooling and then maintaining, let's say, the server and VM world and the new one with different tools. So we made this repo where one script very easily builds all the containers, all the images, and you can launch it on your laptop. But that's for a laptop, just a deployment tool. We put it together, and what I'm going to demonstrate right here today is how we put it into production, how to maintain it, how to scale it, how to get it into containers. So we actually deploy OpenStack inside of Kubernetes. So let me explain what I have here. It will be a little bit technical, so for some people it may be difficult to follow, but Lachy will try to translate. So, I have my Kubernetes cluster, which is basically five bare metal machines in our data center, and I divided them: three for the OpenStack control services, the control plane, and two for the compute nodes. And what I have here now is completely empty; there is nothing. So you're starting from scratch here? Yeah, I'm starting from scratch. I have no deployments, no pods, nothing. So what I will do now is launch the support services. What do you define as support services? The services that you need when you want to run OpenStack. So I launch the memcached storage nodes, the MySQL database, RabbitMQ messaging: all the support services that you need. And you can see that it's really, really fast. I have a deployment with one instance, and it's available in five seconds. I have a running database, and in about 35 seconds I have my database prepared and the whole support cluster running there. 35 seconds is the new baseline. Normally it should be more like 30 seconds.
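Each of the support services launched here is just a small Kubernetes object. As a hedged illustration (the image tag and labels are assumptions, and the API group reflects the pre-1.0 Deployment era), a memcached Deployment might look like:

```yaml
# Illustrative sketch, not the project's actual manifest: a minimal
# single-replica Deployment for memcached. MySQL and RabbitMQ follow
# the same pattern with their own images and ports.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: memcached
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: memcached
    spec:
      containers:
      - name: memcached
        image: memcached:1.4
        ports:
        - containerPort: 11211
```

`kubectl create -f memcached.yaml` brings it up, which is why a fresh support tier can appear in seconds: Kubernetes only has to pull an image and start a process, not provision a VM.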
We can create the OpenStack part. So now I launch OpenStack, and because a standard part of our enterprise-ready solution is SDN as well, I started OpenContrail, which is a Neutron plugin for SDN. And now I have the deployments. So not only have you deployed OpenStack, you've deployed an SDN as well? Yes, exactly. So I have the standard services like Keystone, Glance, Cinder, and then I have OpenContrail. So it's running. I can see the deployments as well as the pods. What's a pod? A pod is, in some cases, a single Docker container. For memcached, for example, it's one single instance of a Docker container. In the case of Nova, you can see that there should be six: one container for each service, the API, nova-scheduler, nova-conductor. It's crash-looping because it takes about a minute; it's not expecting everything to start so fast. Yeah, it needs a minute or so for the services to get up and running. And now we can actually say... Jakub, you know me, I'm a show-me-the-money guy. Prove to me that this thing worked. So as well as the deployments, I have services. And the services here are actually the endpoints: which service is running on which port. In the case of Keystone, it's this IP address on this port. So what are they, like load balancers? Yeah, it's automatically a load balancer, each service balancing across its containers. Okay. So we can check my Keystone RC file for managing OpenStack, which I prepared. I have the same address here as for the Keystone service. So I can source my Keystone RC file, and let's try a Keystone user list. Okay. So, Keystone is running? Yeah, Keystone is running, with all the users, with all the tenants, with all the endpoints. What about Glance? The Glance API is running. So let's try Cinder. Running. And you'd like to create something, to prove this is not smoke and mirrors. Let's show the SDN as well. Yeah, let's do it with the SDN. So let's create a network. All right. Yeah, so...
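The RC-file step can be sketched like this. Every value below is a placeholder: the `OS_AUTH_URL` would have to match the cluster-internal Service IP and port that `kubectl get services` reports for Keystone.

```shell
# Hypothetical keystonerc -- all values are placeholders; the auth
# URL must point at the Kubernetes Service endpoint for Keystone.
cat > keystonerc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://10.254.0.10:35357/v2.0
EOF

# Load the credentials into the current shell.
source keystonerc

# With these set, the usual clients talk straight to the in-cluster
# endpoints, e.g.:
#   keystone user-list
#   glance image-list
#   cinder list
```

Because each Kubernetes Service load-balances across its pods, the clients never need to know which container actually answers.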
Okay, so "OpenStack Day Prague". Fantastic. Okay. Yeah, so... Well, let me see. I see that everything's at one. How can that be redundant? Can we do anything about that? Actually, yes, because right now I'm running only one instance, one container, of each. So what about availability? What about scale? Yeah, so let's run the scale. And again: OpenStack, and replicas, number three. So let's scale to three of each. It's going to take a while, right? I don't think so. Okay. Yeah, so I scaled, and you can see that now I have three instances of each, in a matter of seconds. Okay, you've got me now. But you need to get a little bit better at this. Yeah, I know what you mean. I'm really nervous; it's still starting. Okay. So let's bring up the compute node side, because we forgot to start the OpenStack compute. So we actually run libvirt and nova-compute in containers as well. Okay. And now... looks like Nova's not running yet. Nova, come on. Yeah, there we go. So we've got the full contingent there now. But you know what? If we're going to say that this is enterprise-ready: can you upgrade this? Yes. Okay, show me that. Come on. Because enterprises need to do upgrades, you know that? Let's check the version of Neutron. Right now we have, I think, Kilo. So let's jump into the container and check the neutron-server version. Yeah. Okay. So we're on Kilo? Yeah, this is the Kilo stable version. And now, how do you actually do an upgrade? Show me that. This is going to be complex, right? So I have my Salt master, which is the orchestration node that deploys all the stuff. And what I have to do is change this single line from Kilo to Liberty, and then the number of replicas, because we are now running three. So this is my... usually you would go through the Git versioning system; here I'll modify it directly. Okay.
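The scale-out step is a one-field change in the generated manifest, or one imperative command. A hedged fragment, with the deployment name and image as assumptions:

```yaml
# Illustrative Deployment fragment; name and image are assumptions.
# Changing replicas from 1 to 3 (or running:
#   kubectl scale deployment nova-api --replicas=3
# ) makes Kubernetes start two more identical pods, and the Service
# in front of them starts balancing across all three.
spec:
  replicas: 3        # was: 1
  template:
    spec:
      containers:
      - name: nova-api
        image: registry.example.com/nova-api:kilo
```

This is why scaling finishes in seconds: stateless API containers can simply be duplicated behind the existing endpoint.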
So what we've just done is edit the Salt configuration to perform an upgrade. Now let's run it against the Kubernetes master. What did you just do? Kubernetes uses manifests, and the manifests are what gets launched. We are generating the manifests, because otherwise I'd have to go into each manifest and change every single version by hand. It's automated. It's automated. You can see that I changed my Neutron deployment definition from one replica to three replicas, and changed the Docker image version from Kilo to Liberty. And now, let's upgrade. So... okay, apply my stuff. Should we go get a coffee now? No, this is going to be quick. What did you just do here? I just launched a watch, and we can see live how the containers are replaced, how the pods are... So it's actually keeping the service available as it does the upgrade? All the time. It's just switching the containers one by one, in each location, and it takes, I don't know, 40 seconds to complete the rollout. Yeah. We should end up with three of everything available. Yeah, everything is available. So let's just hold on. Can you show me a look at the pods? What's happening over there? Yeah. On both sides you can see how we're terminating the old instances and starting the new ones. And if we check the deployments again... it looks like we're almost normalized. Well, normalized... so it shows. It works. Okay. And now, show me that Neutron's still got that network it created. We're up. We're ready. This is Liberty, and now, proof that I'm not lying, that it was really upgraded, and not that I just disabled and re-enabled something. Okay. So it's still got the OpenStack Day Prague network? Yeah, from before. So just show me the deployments again. So what we've done is create a production-ready OpenStack... Yes. ...and upgraded it in eight minutes. Yeah. Has that been done before? I think that's a first. Fantastic. Thank you very much. You're welcome.
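The upgrade just shown boils down to a one-line change in the Salt model plus a re-apply. A hedged sketch; the key names and registry are assumptions, not the exact OpenStack-Salt schema:

```yaml
# In the model on the Salt master -- one line changes:
neutron:
  server:
    version: liberty   # was: kilo
    replicas: 3
# Regenerating the manifests flips the image tag in the Neutron
# Deployment, e.g.:
#   image: registry.example.com/neutron-server:liberty   # was :kilo
```

Re-applying the regenerated manifest is what drives the rolling behaviour seen on screen: Kubernetes replaces pods one by one, and the Service keeps routing traffic to whichever replicas are live, so the API stays up through the whole rollout.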
...to break down the barriers to enterprise adoption. So we're really going after making this easy for enterprises to consume and operate. And the best thing is we actually collaborated under an OpenStack project, so I think this is a great result. If you want to learn a little bit more, there's a deep-dive session on exactly how we built this. And in a few weeks we're going to make it all open source, so that you can start getting it up and running, solve your dev problems and your production problems, and really start getting your hands dirty. So I invite you to attend that session. But now I open the floor to any questions. I'll be around all day, so feel free to ask us anything. We're really excited to share this with you. Any questions? The gentleman over here. So the question was: we're using an orchestrator to manage another orchestrator. We actually need the single source of truth. So usually the deployment is a Git repository with YAML files, which defines everything, even the bare metal infrastructure, because we need to deploy bare metal and deploy Kubernetes as well, and then the manifests for Kubernetes. So we are using OpenStack-Salt for delivering everything. To dig into your question a little bit more: an orchestrator to deliver an orchestrator. It was about breaking out the underlying infrastructure from the services that run on top. What we were trying to get to was the ability to lay down that infrastructure much faster. We used Kubernetes, and we built Kubernetes under Salt as well, so that you can lay down OpenStack much, much quicker. The gentleman up the back here. How do you handle database migration with multiple replicas? The question was: how do we handle database migration with multiple replicas? If you've actually looked into this: db sync runs in every app, in a loop.
I don't know if you've ever looked at the code, but it's actually asking: is the schema correct? Is the schema correct? It's running that in a loop. But if you wanted to talk about... I think the question is more towards the Galera cluster and stuff like that. We have a single instance of MySQL. Yes, because we're still in production, we can federate things: running the databases in VMs, and the rest in containers. But it works in Kubernetes as well. Yeah, even though we showed that everything was in Kubernetes today, there are still elements that we wanted to keep in VMs, just for stability reasons right now, like our data store, Galera, in OpenStack. So we could actually pick and choose and say: hey, we want the front-end Nova or Glance applications to be in containers, but we can leave Galera and maybe some of the messaging still in VMs. You can pick and choose. Any other questions? I have a question. [inaudible] How do you develop on this? Yeah, well, I invite you to come to that session this afternoon, because we're going to show you how you can develop on this platform, on your laptop, on the train, on the way to work, and work off that same single source of truth as the production infrastructure. So I invite you to come to that session and get your hands dirty. Any other questions? Fantastic. Thank you for having us. It was really great to be here.
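The "db sync runs in a loop" answer from the Q&A can be sketched as a generic retry wrapper of the kind each service container might run around its schema sync. The wrapper itself is our assumption about the container entrypoint, not code shown in the talk:

```shell
# Hedged sketch: retry a command until it exits 0. In a Nova
# container the wrapped command would be something like
#   retry_until_ok nova-manage db sync
# which is safe to re-run from several replicas because the sync
# checks whether the schema is already current and no-ops if so.
retry_until_ok() {
  local attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    echo "attempt $attempt failed, retrying in 1s..."
    sleep 1
  done
}
```

This is why multiple replicas don't fight over migrations: each one just waits in the loop until the database is reachable and the schema check passes.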