Should we start? So I'm here to talk about multicloud CI/CD with OpenStack and Kubernetes. Before I get into the content, let me introduce myself. I'm Maxime. I'm a cloud consultant. I help people with their public and private cloud projects, usually a combination of OpenStack, Kubernetes, Ceph and CI/CD to put it all together. This is what I do. That's enough about me; we can get into the interesting stuff. So the talk is about multicloud. First, I'll cover a little bit of what I mean by multicloud, since it may mean different things to different people, and why you might be interested in doing multicloud. The main idea of multicloud is to run in several clouds, right? Instead of running in a single provider, you run in several clouds, possibly provided by different companies. So that's what I mean by multicloud, a kind of loosely defined thing. Why would you want to do that? In a lot of cases, it's about resiliency. You want to cover situations where one cloud has issues and you still want your application or your workloads to work. So you have a second cloud, or a third cloud, or other clouds to cover for that. So there's a big resiliency aspect to multicloud. There are also vendor lock-in considerations. Maybe you don't want to put all your eggs in one basket. You might have a great relationship with one company, but who knows, maybe they will get acquired by a competitor, or maybe they will just go out of business or something like this. So it's nice to be prepared and already have other cloud infrastructure in place so you can just flip a switch and move on. There are also cost considerations. Maybe some providers all of a sudden decide to increase prices or something like this. Well, you might want to leverage other cloud providers kind of on the fly, without too much hassle, without a huge migration from cloud A to cloud B. That cost aspect ties in a little bit with hybrid cloud.
Lots of people want to do a private cloud deployment that handles their baseline workload, and then cloud burst into a public cloud provider to handle the peaks. That way you can be very cost effective on your baseline load and handle seasonal traffic, holiday season or something like this, in a public cloud provider that has more capacity to absorb those short-lived bursts. That's definitely a cost motivation. There can also be things around features. Maybe your different cloud providers have different features. Some of them have Magnum, some of them have Octavia, some of them have this and that, and maybe you don't find one cloud provider that has all the things you want, so maybe you need to mix and match a bit: some things here, something there. There can also be things around locations. The giant providers have a set of locations, and those locations might fit you or they might not. Maybe your application is very latency sensitive and you need to have something very close to your end users. This is where having an edge computing situation, or having some different locations, can be interesting compared to what's offered by the big guys, let's say. That's it for the multicloud part. Multicloud CI/CD: why do we even want to do CI/CD? The main thing is to fail fast. We don't want to invest lots of work only to realize that our new patch will fail. We want really quick feedback that there's something wrong here, maybe there's a typo somewhere, something like that, before we've invested a lot of time or involved the ops team. So it's really about having automation in place so you can catch that early. In a multicloud context it's a lot about consistency as well: if you have lots of locations, it's really important to have automation in place to make sure you're consistent across locations.
If you have humans involved in setting those things up, you'll very likely end up with slight differences: one person did it this way at one location, another person did it at another location and made a typo or introduced a little bug, and then you end up in situations where it's difficult to troubleshoot why it works in one cloud region but not in another. So that's the other motivation for CI/CD: when you have lots of locations it's important to leverage that. And then the last part of the title is OpenStack and Kubernetes. OpenStack here is the API-driven infrastructure. If you want to do automation, you need APIs or easy ways to automate setting up the infrastructure. We can't rely on sending tickets to people to get things to happen; that's not going to work for CI. It's really important to have an API-driven infrastructure. OpenStack provides open infrastructure that lets you not be dependent on a single organization: it's a community, lots of organizations are involved, and we are not depending on a single cloud provider. That gives flexibility around resiliency and vendor lock-in. Also, OpenStack has a huge marketplace of public cloud providers. There are around 60 of them listed on the OpenStack website, companies providing public cloud services. So most likely there is one that's close to you or has features you might be interested in. And the Kubernetes part is there to provide a container ecosystem, something very developer centric. Developers will most likely be familiar with Docker, so there's a good interaction there. It makes the application deployment portable and lets you reproduce things fairly easily between different locations. We don't want problems where it works in cloud A but doesn't work in cloud B. So that's kind of important. That's the rationale for the title.
Then, as an overview, a visualization of this: you'll have your users talking to your app, your app consuming Kubernetes, and Kubernetes consuming OpenStack. That's really your data plane, where your users' requests flow; a request goes down the left-hand side. And on the right-hand side you have what I call the control plane, where your DevOps people work. They talk to your CI, and then the CI makes things happen in the data plane. Maybe it updates some things in the application, maybe it does some things in Kubernetes and some things in OpenStack. But it's not involved in every single request your users are making; the idea is to really separate the control plane from the data plane. So the app is the business logic, Kubernetes the container platform, and OpenStack the infrastructure as a service. That's really the overview of the multicloud thing. To go a little bit more into detail on the architecture: how can we make this happen in real life? I'm going to present one way to do it. There are several ways, and if you have another way that works best for you, that's fine; we can talk about different options. So your users open a browser and try to reach the application, and the first thing they do is enter the URL. That's the entry point of the whole thing. You're going to need some sort of DNS name, and from there your DNS name will need to be able to load balance between your cloud regions. That's what I call global load balancing. You have two options there. You could go with a CDN, and if you're already using a CDN then it's probably easier. Or you could go with something at the DNS level to load balance between your regions. There you can do geo routing with either Route 53 or Dyn, I think, or some DIY scripts that would update your DNS records dynamically, maybe based on some monitoring information.
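A DIY updater like the one just mentioned can be quite small: probe each region's health endpoint, then publish only the healthy targets, falling back to the full set so that a monitoring outage never empties the zone. A minimal sketch in Python; the region names and health-check URLs are made up for illustration, and the actual call to the DNS provider's API is left out:

```python
import urllib.request

# Hypothetical region endpoints; names and URLs are assumptions.
REGIONS = {
    "dallas": "https://dallas.example.com/healthz",
    "stuttgart": "https://stuttgart.example.com/healthz",
}

def probe(url, timeout=2):
    """One health probe; any HTTP error or timeout counts as unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def healthy_records(health_by_region):
    """Given {region: bool} probe results, return the targets that should
    stay in the round-robin record set."""
    records = sorted(r for r, ok in health_by_region.items() if ok)
    if not records:
        # Never publish an empty record set: if every probe fails at once,
        # it is more likely our monitoring broke than every region did.
        return sorted(health_by_region)
    return records
```

A cron job or CI schedule would run `probe` for every entry in `REGIONS`, pass the results to `healthy_records`, and push the resulting set to the DNS provider.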
Or you could do some dead simple round-robin DNS. So there are different options for the global load balancing. Then once we are load balanced, we are hitting the apps. Those are the things you are developing, and here the app needs to have certain characteristics for the multicloud thing to work. It has to be kind of a 12-factor app; there's a whole philosophy there. The app has to be dockerized, otherwise you won't be able to run it in Kubernetes, so maybe that's a little bit obvious. And the application has to be somewhat HTTP based, otherwise you will have difficulties with your CDN providers. And of course the application has to be distributed: it has to be able to run in an active-active mode. That's really important, otherwise you'll have consistency issues. So those are kind of the requirements on the application. What I'm showing here doesn't work for everything, but it works for some cases, and that's probably good enough. So we have the app at the top, then we have Kubernetes underneath. Kubernetes is the cloud abstraction layer. It helps to make one API that abstracts the infrastructure and gives something more developer centric to consume. What's really important here is to do one cluster per location, not one giant cluster across different regions. This is really important because, remember, we have these resiliency goals, so we don't want to stretch our failure domain across the globe. So it's really important to do one cluster per location. And the Kubernetes part uses standard Kubernetes constructs: things like Deployments, Services, Ingress. The Ingress is how the communication from outside happens, right? And then we'll talk about federation in the next slide. So, Kubernetes federation. You could think about two different things when you talk about federation in Kubernetes.
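An app with the characteristics above (dockerized, HTTP based, stateless, identical in every region) can be tiny. Here is a sketch in the spirit of the hello-world demo app shown later, where the only per-region difference is an environment variable; the `REGION` variable name is an assumption, not part of the original demo:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def region_banner(region):
    """Body served by every replica: identical code everywhere, with the
    region injected via the environment, 12-factor style."""
    return f"Hello World from {region}\n"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = region_banner(os.environ.get("REGION", "unknown")).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    # REGION would be set per cluster, e.g. in each Kubernetes
    # Deployment manifest, so the same image runs unchanged everywhere.
    HTTPServer(("", port), Handler).serve_forever()
```

Because the handler keeps no state, any replica in any region can answer any request, which is exactly what active-active behind round-robin DNS requires.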
It depends a little bit on whether you come from an OpenStack background or a Kubernetes background. If you come from an OpenStack background, you are probably thinking of something like Keystone federation, where you have one set of credentials that works in different clusters or different regions. This is possible in Kubernetes. You can do it either with OpenID Connect or with a webhook, and the common way to do it is OIDC, so OpenID Connect. You have two parameters you set on your API servers, the issuer URL and the client ID. The API server will then verify the signatures of the tokens against the IdP, the identity provider you have selected. Common or popular ones like GitHub, GitLab and Google are all OpenID Connect compatible, and they work out of the box with Kubernetes authentication like that. On the client side, you need to set some parameters, of course, to say who you are. There you either do it manually, with a set of kubectl commands you need to run, or there's a UI you can use that generates the config file; you just drop it in and it works, and you can access cluster A and cluster B with the same set of credentials, which makes administration very easy. So that's one thing you could mean by Kubernetes federation. The other thing is what they call KubeFed. There the idea is one API to rule them all: you have one API that manages different Kubernetes clusters. And there have been two different iterations. They started with Federation v1, which was kind of discontinued, and they are working on KubeFed v2, which is a work in progress. I think they released 0.0.3 recently. V2 requires Kubernetes 1.11 and should be coming out in beta somewhere in Q4, and GA some time in the future. So there we are a bit stuck in limbo.
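What the API server does with those two parameters can be illustrated with a small sketch: it checks that the token's `iss` (issuer) and `aud` (audience) claims match the configured issuer URL and client ID. Real verification also checks the token signature against the IdP's published keys; this sketch deliberately skips that and only shows the token structure and claim checks:

```python
import base64
import json

def decode_claims(jwt):
    """Decode the payload segment of a JWT. Does NOT verify the
    signature -- the API server does that against the IdP's keys;
    this only illustrates what is inside the token."""
    _header, payload, _sig = jwt.split(".")
    # JWT segments are base64url without padding; restore it first.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

def check_token(jwt, issuer, client_id):
    """Mirror the two API-server settings: the token must come from the
    configured issuer URL and be minted for our client ID."""
    claims = decode_claims(jwt)
    return claims.get("iss") == issuer and claims.get("aud") == client_id
```

Because every cluster's API server is configured with the same issuer and client ID, the same ID token is accepted everywhere, which is what gives you one login for all regions.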
If we want something production ready, something long term, we're a bit like: okay, what do we do? Do we use the discontinued thing or the in-development thing? So it's a bit up to you to see what you want to do. If you're in the early stages of your project, maybe you go with v2, or maybe you do something DIY; that's always something you can do to fill the gap in the meanwhile. So that's what I mean by Kubernetes federation; the situation is what it is. And supporting all of that, we have of course OpenStack providing the infrastructure. In day-to-day terms that's instances, security groups, key pairs, floating IPs and networks, all these kinds of things. Each OpenStack region is completely independent. That's also why it's important to have one Kubernetes cluster per location and not stretched; otherwise we would really break the isolation model. So that's it for the architecture. Now I'll talk a little bit about the tooling that's necessary to make all of that happen. There are lots of moving parts, so let's explore that a little bit. I'll start from the bottom, the infrastructure. There you have three popular options to set up your OpenStack VMs and all that. You could start with Heat, which is the OpenStack native project. That works great, but it's OpenStack only, right? So if you want to do multicloud with some OpenStack clouds and some non-OpenStack clouds, maybe Heat is not the best for that, because Heat will not work in AWS or Google Cloud. Heat also has a bit of a smaller ecosystem: it's difficult to find Heat templates on GitHub that are generic enough to work in different clouds. So that's really a challenge there. If you have in-house expertise with Heat, go for it; if you are an OpenStack-only deployment, it's really a good tool, but it's not for everybody.
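Whichever tool you choose, they all end up driving the same OpenStack APIs to create those instances, key pairs and networks. As a rough illustration, here is what booting one node could look like with the openstacksdk Python library; the image, flavor, network and keypair names below are placeholders, not values from the talk:

```python
def server_spec(name, region):
    """Build the arguments for one Kubernetes node VM in a region.
    Keeping this pure makes it easy to diff what each region would get
    before touching any API. All resource names here are assumptions."""
    return {
        "name": f"{name}-{region}",
        "image": "ubuntu-18.04",      # assumed Glance image name
        "flavor": "m1.medium",        # assumed flavor name
        "network": "k8s-net",         # assumed tenant network
        "key_name": "ci-deploy",      # assumed keypair
    }

def boot_node(cloud_name, name, region):
    """Boot the VM with openstacksdk; `cloud_name` refers to an entry in
    clouds.yaml. Not called here, since it needs real credentials."""
    import openstack  # pip install openstacksdk
    conn = openstack.connect(cloud=cloud_name)
    return conn.create_server(wait=True, **server_spec(name, region))
```

In a multi-region setup, the same spec function would be applied once per region, which is one way to keep the clusters consistent by construction.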
Another tool you could use is Ansible. Ansible does lots of stuff; here I'm really talking about the cloud modules in Ansible, os_server for instance, which help you spin up VMs in OpenStack. Ansible has support for more than just OpenStack: AWS, Google Cloud, VMware, stuff like that. So you can really do multicloud across different providers, which can be interesting. Ansible is a popular tool, so you might already know about it and already have lots of expertise in-house. It's not a bad choice. And finally, a popular choice is Terraform. Compared to Ansible, which does lots of stuff and where the cloud modules are just one part of it, Terraform is really focused on infrastructure as code. And as such it has somewhat more advanced features that you might want, like dry-run capabilities: this is what I'm about to do, do you confirm or not? It has support for lots and lots of platforms, including really exotic stuff. That's something to consider if you're planning to use some exotic platform that isn't supported elsewhere. So pick your poison, whatever fits you best; just be aware that there are pros and cons in each category. Now that we have our infrastructure up, we're going to need to install Kubernetes on top of it, right? As with the infrastructure, there's an OpenStack native project for that: Magnum. It's OpenStack only as well, so if you are planning to use Google Cloud too, you're not going to be able to use Magnum in the non-OpenStack clouds. And the ecosystem is growing, but it's early stages. It doesn't have an Ansible cloud module yet, I think, for instance. Maybe it will come, maybe there's a pull request in progress for it, but not all the tooling is there yet. So it's something to consider.
If you're thinking of doing AWS plus OpenStack, what you could do is use Magnum on the OpenStack side and then kops on the AWS side, since kops is an AWS-only tool. So that's one option as well: you can mix tools to make it work. Rancher is another option. It supports, I think, Google Cloud, Azure and AWS, but doesn't really have OpenStack support out of the box. So that's another option if you want a multi-platform, multi-vendor solution. And if you want something completely agnostic, you have Kubespray, which really doesn't care what your provider is. It even works on bare metal and in VMware, so it's really portable. Kubespray is a set of Ansible playbooks that deploy Kubernetes, and it comes with a set of Terraform recipes that you can use for OpenStack. So that's kind of neat; you don't need to know too much about Terraform to get it working. The slides are online, and I'll put up a QR code at the end if you want to check stuff, so you can even take a photo. All right, so now is the part where I do the demos. I hope everything will work out. First, a few words about the demo setup so you can understand what's happening. I talked a lot about different options, and I tried to keep things simple, so I said: let's take DNS round robin, a very basic, very simple setup, and Kubespray, to have something really portable. I've used it before, so I'm familiar with it as well. And I'm using the Terraform recipes that are built into Kubespray to deploy into the OpenStack clouds. On the CI front, I'm using GitLab CI. GitLab is a bit open-core, it's not fully open source, but you could use whatever CI tool you want; you could use Zuul or Jenkins if you like. Basically, it's just docker build, helm install and some wrapping around it. That's really the essential part of it. The demo runs on 36 regions now; I added one just at lunch, so that's cool. It's really quick to pop up a new region.
And the cloud providers are running all sorts of different flavors of OpenStack, from Havana to Rocky. In terms of resiliency, this is kind of interesting: if there's a bug in OpenStack that affects you, it might not affect all the versions, for instance. So in terms of resiliency, this is kind of cool. All the source code for the demo is available at the link below. It's all open source; you can just git clone it if you want. So the demo runs on lots of regions, and I really have to thank all the cloud providers that participated. Yeah, a big thank you to all of them. And yeah, I'll run the demo then. So this is the GitLab group that has all the projects. There are basically four projects. There is one called app: that's just the hello-world application I'm using for the demo. The one called clusters manages the life cycle of all the Kubernetes clusters, and the rest are just helpers, like a Docker image to make things a little bit faster and a Helm chart to package the application. So the important ones are those two. I'll open the application and show you what it does. The application is already deployed. It's a hello world, basically, and it says right now we're hitting the Dallas server, and we have a photo of that place. So if I open another one, we hit Dallas again. Now we're hitting Stuttgart. I have to close the browser and open it again, otherwise there's some caching happening for a few minutes in the web browser, and I don't want that for the demo. Now we're hitting Dubai, and I think you get the idea, right? So let's say hypothetically I'm a developer and I want to update my application. It says Hello World; I want it to say Hello OpenStack Summit or something like that. So I'm just going to make a commit to the repo. Let's edit that. Let's say OpenStack Summit.
Right now I'm going to commit to master, but you shouldn't do that; you should really use feature branches. I just don't have time to go through the whole workflow; this is standard Git workflow. So let's commit. Then we'll go check the pipelines, which is the CI part of GitLab, and we see there's a new pipeline going. It will take a few seconds. The pipeline setup is: we build the application, we run some tests, and then we send it to production. It's a very simple pipeline; I don't have time for a giant pipeline in the demo, I'd be sitting here for an hour. The build is basically docker build and then docker push to the registry, and the test stage runs a Herokuish test to check that the application is listening and things like that. Once that's done, it goes into the production phase, which deploys the application. I made a little visualization app for that. This is the map of the world, obviously, and each dot represents one cluster and changes color based on the status of both the Kubernetes cluster and the deployment of the application, right? So as soon as the tests and the build have passed, the dots start to turn blue as the application deploys. This takes a little while, so I'll keep that on the side and continue with a demo of kubectl and the Kubernetes federation, right? So let's focus on Europe; I have the terminal there. In the clusters repo I have a folder called kuberos. You launch that, and then you have a URL that you can access. Here we are redirected to our OpenID provider, which in my demo is gitlab.com; it would be whatever your company uses. I'm already logged in, so I'm authenticated; otherwise it would ask for credentials. I click download config file, then I move the config file into place: the downloaded kubecfg goes into ~/.kube/config.
This is explained in the readme file if you're wondering. Then we can do kubectl config get-contexts, and it lists all the Kubernetes clusters available to us. Maybe we want to see what's going on in the Milano cluster, say. So we get pods in the namespace called app, with --context milano. And we see the update of the application is ongoing: we are terminating a pod and creating new containers. Maybe we want to check the Berlin cluster; we can just change that flag. We don't need to re-log in; everything is there to start with. So that's kind of neat from a developer perspective: you can see the status of the app, check the logs, or whatever. Let's say you're a bad person and you go, oh, let's try to delete that pod. Delete pod, like this. But there are RBAC rules that in this example give me read-only access to the API, while maybe some admin people have full access or something like that. So we have the deployment of the application still going on. It takes a little bit of time to update all the clusters that are far away; there's a little latency to reach Tokyo and places like that. The last part of the demo I want to show, and I'll keep this pipeline open, is the clusters repo. It takes a little bit of time to load all the jobs in GitLab; the server is not really sized for all these kinds of things. So I wanted to show the cluster management CI. There's one job per cluster: you click play to deploy it and you click stop to destroy it. This is the last part of the demo I wanted to do. So let's say I open a tab; now we are hitting the Montreal 2 cluster. I'll destroy that cluster if I find it in the list. Montreal 2: I click stop, and then it launches the job to destroy that cluster.
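The stop job has an ordering constraint: traffic has to drain away from a region before its infrastructure disappears. A sketch of that sequence, with the actual DNS update and Terraform destroy injected as functions (their names here are assumptions) so the ordering itself can be tested without touching real DNS or clouds:

```python
import time

def teardown_cluster(remove_dns_record, destroy_infra,
                     dns_ttl=60, sleep=time.sleep):
    """Drain before destroy: pull the region out of DNS, wait out the
    TTL so resolvers stop handing out its address, then tear down the
    infrastructure (e.g. by running `terraform destroy`)."""
    steps = []
    remove_dns_record()       # stop sending new sessions to the region
    steps.append("dns")
    sleep(dns_ttl)            # existing cached answers expire
    steps.append("ttl-wait")
    destroy_infra()           # now it is safe to delete the VMs
    steps.append("destroy")
    return steps
```

Doing the destroy first would blackhole every user whose resolver still caches the old record, which is why the TTL wait sits in the middle.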
It updates the DNS record, waits for the customers or the sessions to move away from it, and then it runs terraform destroy on the cluster, and that's it. You can see Terraform is going now. It's going to wait 60 seconds for the DNS TTL to pass. On the other map, everything is green, so everything is deployed, and if I refresh, we now get Hello OpenStack Summit. So that's all the clusters updated. I won't wait for the DNS propagation to finish, and we can move on to the conclusions. So, to wrap up: KubeFed v2 is not there yet, but it's coming, and that will be much nicer than doing CI tricks to get the federation. Also, OpenStack interop is kind of hard. Every OpenStack cloud provider has the OpenStack API, yes, but slightly different versions of it. Some have Neutron routers, some don't; some have floating IPs, some don't; some have custom Glance images; some change the default usernames in the Glance images; some use raw images, VHD images, QCOW2 images. It's kind of a mess to sort out all those details, and as a consumer you kind of have to think: do I want to handle all those exceptions, or do I just take a common denominator and go with that? This is something each organization has to think about and make a decision on. I think that's all I have. Thank you very much for your attention, and we can take questions. [Question about the map] So that's just a DIY thing I hacked together in HTML. It's the Leaflet JS API; I added the GPS coordinates of the clusters into a YAML file and mapped them there. Then it queries some API to get the status of the different clusters and so on. It's a very simple thing; maybe you'd want to use something more advanced like Grafana to do it properly. Any other questions? Yeah, go ahead. Yeah, on the map, you mean: how does the CI have permission to deploy the app?
How does that work? So the access to the Kubernetes clusters is configured in GitLab; there are settings for that. You set the API endpoint, the certificate and the secret, and they're passed as a config file or environment variables into the CI job, and that's it. I think you can do that in any CI system: you pass some environment variables to authenticate to your cluster, and that's that. [Question about GitLab environments] So the question is: do I use GitLab environments to configure things? Yes, and we can see that somewhere here. It takes a while to load all the environments, that's the thing; sometimes it's fast, sometimes not. So you see all the environments like this, and you can redeploy them and see the status, the last deployments and things like that. Yeah, that's it then, I guess.