First of all, welcome to our talk, GitOps with Open Telekom Cloud, Terraform, Helm and Argo CD. We have to hurry a little, we only have half an hour for a lot of topics. What we want to show is how we help our customers on Open Telekom Cloud to utilize our platform very quickly with a Kubernetes deployment. We will install everything with Terraform, then install Argo CD on top of our Kubernetes cluster with the help of Helm, and that is all we need to reach a GitOps environment. My name is Uli Schneider, I'm customer success manager at Open Telekom Cloud, and with me today is Victor. Yes, thank you very much. My name is Victor Getz, I'm the CTO, I'm this guy here, and I'm founder of iits consulting; we provide cloud native solutions and cloud native consulting, and today I just want to show you our best-practices stack and what has made us very successful over the last three years. So this is the agenda: a short introduction to GitOps, then we show our big picture of how we do things in our GitOps approach, our company secret you could call it. It consists of Terraform and an Argo CD GitOps approach, but you can also use Flux CD, and later we have some time for questions. So what is GitOps? GitOps is a standardized workflow for how to deploy, configure, monitor, update and manage infrastructure as code. A long sentence, but in short: you keep your state inside Git, and this is the single source of truth. If you want to look a little deeper into what GitOps is, I recommend Weaveworks; there you can read a little more about it. But we want to do the real stuff, we want to have a coding session here. So now I present to you our architecture: how do we do things?
Our big picture looks like this: we use HashiCorp Terraform to set up our cloud infrastructure on OpenStack, in our case Open Telekom Cloud, and we set up things like the Kubernetes cluster, the load balancer and so on, based on Git; that is also what we will do today. Okay, now I have all the cloud stuff, I have my infrastructure, but how do I get my services online? How do I deploy things? There is a GitOps tool called Argo CD, you can also use Flux if you want, and Argo CD does pull-based deployment. That means it watches a Git repository, which we call the infrastructure charts repository, and deploys everything that is inside it. This time we use Helm, and it installs the Elastic Stack and the Kafka stack inside our Kubernetes cluster here. It's super simple and straightforward. Okay, but then you ask: what about my Vue.js applications, or my Kotlin application, or my Python? How do I handle those? It's also pretty simple. We have separate repositories per team, which we always call the app charts repository, and that is where our business services live: Vue.js, Kotlin, everything like that. Because we only have half an hour today, we will not cover this part; we will focus on the infrastructure stuff. So, time to Terraform. What we will do today: I show you our open source code. Basically, all the code I will show you today is open source, you can just look it up. And we have here a project factory for Open Telekom Cloud, from which you can choose the modules for the cloud setup you want. You see there are multiple modules, for example a database, a private DNS, maybe you need jump hosts for security, and things like that.
And then you are just building with LEGO bricks: you put the pieces together and you have your cloud up and running. There is also an example of how to use it, the quick start. There are also some best practices: how you name things, an architecture example based on the Terraform setup, how we do this. And here are the currently available modules. There is also a link to the OTC Terraform template. This is like a blueprint for you: when you are starting with Terraform on Open Telekom Cloud or an OpenStack cloud, you can use such a blueprint to start very easily, and it is also what I will use now. And for the infrastructure charts, if you remember, we have some Kotlin services, ingress controller, Traefik and things like that; for those we have a different blueprint which you can also use. Then you just combine them both and you are finished. So, what we will do today, let's take a look here. You see we have a completely empty Open Telekom Cloud, and now we will get stuff done, meaning we will deploy some things. I used this template from GitHub and I will go into the openinfra dev directory. We will set up a dev stage, run terraform init, it initializes the modules. Basically I changed almost nothing here. So, terraform apply. I need to apply it now because we need to install the node pools, the elastic load balancer and all the rest; that's why I'm starting it now. And while it runs we will discuss what is done here and what kind of modules we are using. So we go back to our slides, and I hand over to Uli. Well, in general, at first we need to install some typical services from our Open Telekom Cloud. Of course, we need some network stuff: a VPC, which in OpenStack terms is a router and a network, we install a subnet, and we need an elastic IP so that we can access the cluster from outside.
We provision a load balancer, and the thing that takes around eight to ten minutes to provision is our Kubernetes engine, the CCE, the Cloud Container Engine. In addition, we also install some plugins directly on our CCE. One plugin is for autoscaling, which allows our CCE instance to scale out and scale in. So we have node scaling, which is really important for most customers: not only pod scaling, but your cluster also has the chance to grow and shrink. Next slide, please. We also use a private DNS zone for the services inside our VPC, and for access from outside we use a public DNS, both services from Open Telekom Cloud. And for this showcase we use a bucket in our Object Storage Service, where for this demonstration we store the secrets and use them for some injections, to make all of this possible in half an hour. Yeah, thank you very much, Uli. So how does it look in the code? Let's take a look. We see the cluster is still creating here, it will take around ten more minutes, so let's jump quickly through what we did here and how the templating looks. First, you have some environment variables: you need the access key, secret key, domain name and so on. Whoever is familiar with HashiCorp Vault will recognize it here: I use HashiCorp Vault for the secret injection into environment variables, but you can use something different. We always use a name called context: the context you are currently in, maybe a customer name, the department, or whatever. And we have the stage name. With such a setup I can set up a cloud very easily and deploy a lot of resources. And this is not some hello-world thing, this is really what we use.
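The "LEGO brick" composition just described can be sketched roughly like this. Module sources, names and variables below are illustrative placeholders, not the actual project-factory modules:

```hcl
# Sketch only: assumed module layout and variable names, not the real
# project-factory code. The pattern is what matters: context + stage
# feed naming, and each cloud service is one composable module.

variable "context" { default = "openinfra" }
variable "stage"   { default = "dev" }

module "vpc" {
  source = "./modules/vpc"            # VPC = router + network, plus subnet
  name   = "${var.context}-${var.stage}-vpc"
  cidr   = "10.0.0.0/16"
}

module "cce" {
  source         = "./modules/cce"    # Cloud Container Engine cluster
  name           = "${var.context}-${var.stage}-cluster"
  subnet_id      = module.vpc.subnet_id
  autoscaling    = true               # node scale-out and scale-in plugin
  node_count_min = 2
  node_count_max = 10
}

module "dns" {
  source  = "./modules/public-dns"    # public DNS entry for the admin page
  zone    = "example.com"
  records = { "admin.${var.context}-${var.stage}" = module.cce.elb_public_ip }
}
```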
We host thousands of containers with this stack, and we are really fast: we can just delete everything and boot it all up again from scratch without any problems. We have a very high uptime too, and the customers are very satisfied with this kind of setup. Here, the upper part is for the infrastructure and the lower part is for Argo CD and Kubernetes. So let's jump into the modules. Which modules did I use? Exactly the ones Uli described, because you always need the same stuff: a Kubernetes cluster, private DNS, public DNS and so on. You see here I use these modules, and you can just pick whatever you want; if you additionally want a jump host, you just add the jump host module. "Victor, one question: if I want to use, for example, a database instance and the RDS service of the platform, the only thing I need is this piece of LEGO brick, this piece of code, inserted there, and I do not need to know the complete code behind it?" Exactly, it is always just the LEGO brick you want to have. For example, we also have an RDS module, for PostgreSQL or MySQL; you can choose whatever you want there. What we also do here is create a DNS entry, so the DNS is managed by Open Telekom Cloud. You see here it is admin. followed by the context variable; the context name this time is openinfra, because we are at OpenInfra, and the stage is dev. So our domain will be admin.openinfra-dev plus the base domain. So, let's take a look: the cluster is still booting up, it takes a little more time, so we go quickly through the other modules. The last part is the encrypted secrets bucket, very easy: we just save all the secrets inside a bucket that speaks the S3 protocol. Why do we need that?
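The naming scheme just described, admin.&lt;context&gt;-&lt;stage&gt;.&lt;domain&gt;, is easy to sketch as a small shell helper. Function name and base domain here are illustrative, not taken from the actual scripts:

```shell
#!/bin/sh
# Build the admin hostname from context and stage, following the naming
# convention described above. The base domain is a placeholder.

admin_domain() {
  context="$1"
  stage="$2"
  base="$3"
  echo "admin.${context}-${stage}.${base}"
}

# Example: the talk's context "openinfra" with stage "dev"
admin_domain openinfra dev example.com   # → admin.openinfra-dev.example.com
```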
Because we always split the infrastructure from the Kubernetes part. We have Terraform for Open Telekom Cloud to build up the infrastructure, but at some point we need to hand over to Kubernetes; that's why we split it, and the bucket is the hand-over mechanism. In real production you would use HashiCorp Vault or something like that instead. So what happens in the Kubernetes part? We read all the secrets we need back out of this bucket. We also tackle the Docker Hub pull rate limit problem; maybe you are familiar with it: if you boot up a lot of services, you run into the Docker rate limit. For that we use a very cool tool called registry-creds. It injects your Docker Hub credentials into every service account in the whole cluster, so you don't need to worry about pull rate limits anymore. And here is where it reads the environment variables and then creates and installs registry-creds. Okay, but let's look at everything we need for Argo CD, so we switch back to the slides. What do we need to install Argo CD? First, we install the common Helm CRDs; for those familiar with Helm, these are custom resource definitions. If you don't do that, you have to build up a dependency tree: if you have thousands of services, they all depend on each other, and you need to express that tree somehow, which makes your life pretty miserable. That's why we install the common and most used CRDs up front, so that we don't have this dependency and everything can just spawn up and boot as it likes. Then we also solve the Docker Hub rate limit problem, like I explained, with registry-creds. And the last step is simply to deploy Argo CD. The deployment of Argo CD is very simple; there is a Terraform provider for it, and that is this one here. This is the bootstrap Argo CD.
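The talk uses a dedicated bootstrap module for this step; a minimal equivalent using the stock hashicorp/helm Terraform provider and the official Argo CD Helm chart might look like this (chart values and kubeconfig path are assumptions):

```hcl
# Minimal sketch of bootstrapping Argo CD from Terraform, using the stock
# Helm provider instead of the talk's own bootstrap module.

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"   # kubeconfig fetched from the CCE cluster
  }
}

resource "helm_release" "argocd" {
  name             = "argocd"
  namespace        = "argocd"
  create_namespace = true
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
}
```

After this single resource is applied, Argo CD itself takes over and pulls everything else from the infrastructure charts repository.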
You can see you need to provide some information there, but there is documentation for all of that. The important thing is just these two lines of code: this is the URL of the Git repository we want to watch, and the part below is what it will install. So everything in the dev stage of the infrastructure charts repository will be installed. Let's jump into that repository and look at what is inside. Inside this repository I dropped some local Helm charts, because it is really easy to use a local Helm chart: you can just drop any chart in here. For example, you drop your Grafana chart, then you go to the values.yaml here and declare it like this: Grafana, namespace, for example monitoring; this is what I did today for the Elastic Stack. Then you commit and push, and Grafana is deployed automatically. And this is really cool, because you don't need to think about who needs access, which kubeconfig to hand to which developer, and all that. "Well, if we are finished, we will see all this stuff up and running inside our Kubernetes cluster?" Exactly. If we do everything right, and this is the hardcore version because we do everything live, nothing is mocked, we should see everything under our admin domain. And the code I am using today is all publicly available under my GitHub account. So let's talk about the services we need. We will install Argo CD, but what will Argo CD install then? If we have a customer, in 90% of the cases you need the same stuff every time. You need some kind of certificate management, for example cert-manager with Let's Encrypt. You need some kind of microservice routing; Traefik is absolutely great for that.
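The declaration step just described might look like this in the repository's values.yaml. The key names are assumptions about the chart layout, not the repository's actual schema:

```yaml
# Illustrative values.yaml entry: enable a locally vendored Grafana chart
# in the monitoring namespace, following the pattern described above.
grafana:
  enabled: true
  namespace: monitoring
```

After a commit and push, Argo CD notices the change on its next sync of the infrastructure charts repository and deploys the chart, so no developer ever needs direct kubeconfig access for routine deployments.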
So that's why we use Traefik today. I also chose Kafka: there are a lot of services to choose from, but three years ago a DevOps guy told me that setting up Kafka with ZooKeeper and everything is super difficult, so I thought, I will show today how easy it can be. So today we will boot up Kafka. We will also use the Elastic Stack with Kibana, Elasticsearch and Filebeat for logging and monitoring, plus a simple example, an NGINX page that just links to all the services. And, also very common, you need some kind of gateway, because you want to protect your cluster and your domain. In this case we just use a very simple basic auth gateway, but in real production you would use something more like an OIDC proxy; that is what we use, for example with Keycloak. These are the services we want to deploy, so let's do it. Oh, the cluster is up and running, nice. So let's deploy the other stuff too. I go now to the... "One question for Victor: so now we have done all the work on Open Telekom Cloud, and we can continue with our deployment on top of it?" Exactly, exactly. Now we come to the Kubernetes part, which has nothing to do with Open Telekom Cloud anymore. So I run terraform init and terraform apply here, and it should deploy Argo CD for me. Argo CD will look at the Git repository and install everything that is in it. You can see the things being installed here. For sure, in a real production environment a lot more comes on top: you need pipelines for the Terraform, maybe Terragrunt, all the testing, and HashiCorp Vault secret injection should be used instead of some kind of bucket.
But in this case we just use the basics, so you have something to work with. It is installing now. In the meantime, while the services install: I always provide my customers with a kind of shell script, really old school. We have here the so-called shell helper, and it has some functions that make your life a little easier, for example to get the kubeconfig and so on. So I source my environment variables again, then source my shell helper, and then I can check whether my cluster is working. I run the get-ELB-public-IP helper and I see the public IP of my load balancer is already there. Okay, let's get the kubeconfig, then kubectl get nodes. You see from the age that the nodes are three minutes old, so that is our cluster. Let's look at what is currently happening on the cluster: kubectl get pods -A. You see Argo CD is currently installing; it takes several minutes, maybe two. Oh, it's already up and running, cool. Argo CD is installed, and now it will look at the other repository and install everything. I started a watch here, and you can see on this line that the admin page is already loaded and some other things are also coming online. So we wait a little. This is the cool thing, sometimes you just need to wait in the DevOps department. Unfortunately, we don't have popcorn here. "Yeah, you can watch your Netflix." Netflix, yeah, it's my favorite: Terraform doing all the stuff on the left side, Netflix on the right side. Very cool. And I'm still much faster than over the UI. Oh, the UI, good point. Let's refresh this page and see if we have something here. You can see there is a cloud container engine, some volumes are already there, we have load balancers and everything; a lot of things are already online.
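The helper functions used here might be sketched like this. The function names and Terraform output keys are hypothetical, not the real shell-helper script:

```shell
#!/bin/sh
# Hypothetical versions of the shell-helper functions used in the demo.
# Assumes Terraform exposes outputs named elb_public_ip and kubeconfig.

get_elb_public_ip() {
  # Read the load balancer's public IP from the Terraform outputs
  terraform output -raw elb_public_ip
}

get_kube_config() {
  # Write the cluster's kubeconfig where kubectl will find it
  terraform output -raw kubeconfig > "${KUBECONFIG:-$HOME/.kube/config}"
}
```

Sourcing a file of such functions keeps the demo commands short: `get_elb_public_ip`, `get_kube_config`, then plain `kubectl get nodes`.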
The question now is: does it also work with my domain, do I have a DNS entry? So we go back again. I abort the watch for a moment and run nslookup. First we get the ELB public IP again, so this is our public IP, and then we run nslookup on admin. ... what did we say? openinfra-dev, on the guardians-of-the-OTC domain. It's matching. Cool. So let's take a look. Patience, wait, wait... we don't have HTTPS yet. Sorry to disappoint you, but it should work: the certificate is not there yet because we go through Let's Encrypt, and most probably the certificate is simply not issued yet. It will come up shortly. Let's look at what is currently happening in Argo CD. We have a convenience function for that too; you can look it up in the Argo CD configuration and documentation. Here we get the password, then we go to the admin page, I log in, and you see it already: this is the beauty, this moment is for you. You just deploy Argo CD, and sometimes hundreds of containers come online. This is joy, it's really cool. So, you thought we would not get the certificate, but there we have the basic auth gateway, and we have certificates too. Cool. Let's look at everything we have in there. We see the applications, and you can see the Elastic Stack is still booting up, because the Elastic Stack runs on the JVM, and we all know the JVM needs a little time. So we need to wait a bit. But Kafka is there, so let's just check the URLs. Argo CD was already reachable by URL, so we need Kafka, the Traefik dashboard, Kibana and Elasticsearch. Kafka: you see two nodes, that is already up and running; with a connector we could connect and write messages. If you want more information about Kafka, you can also look at our YouTube channel. And we have the Traefik routing here.
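The "get the password" step mentioned above corresponds to the standard command from the Argo CD documentation for reading the initial admin password; the talk's own convenience function may differ, but a sketch wrapping the documented command looks like this:

```shell
#!/bin/sh
# Fetch Argo CD's initial admin password, as described in the Argo CD
# getting-started docs: it lives in the argocd-initial-admin-secret,
# base64-encoded.

get_argocd_password() {
  kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath='{.data.password}' | base64 -d
}
```

With that password you can log in as `admin` on the Argo CD web UI under the admin domain.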
These are all our routes, and they are working. Kibana is not up yet, you see; there is a 503 Service Unavailable, which means the container is currently booting but not yet passing its health check. And Elasticsearch is most probably not up yet either, so we just wait a little and then it should be up and running. While things boot, I think we go forward with the slides and check again later whether everything is up and running. So, I hope you have some takeaways from today. What did we not cover? For sure, I used a local Terraform state, because originally I thought I would get one hour and could show you much more; with only half an hour I had to use a local state. Normally you save the state of your cloud in an encrypted bucket or somewhere else safe; there are multiple options for how to do that. And that's it, we have come to the end. These are the links to everything I showed you today. I would be happy if you have an architecture you would like to discuss with me or something like that; I would love to chat with you, because I really love architecture, I love OpenStack, and also cloud native. It's really cool. Thank you very much. Thank you. I hope you had a lot of fun and saw what we can achieve in half an hour with some automation. Some additional information: with this QR code you will find all the links we used here in our Open Telekom Cloud community. You can also meet my colleagues at our booth, B1; it's out front when you come in from outside. And, well, I would say: thank you again, Victor.
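The remote state setup that was skipped for time can be sketched with Terraform's S3 backend pointing at an S3-compatible bucket. The bucket name is a placeholder, and the endpoint shown is an assumption about the OBS region in use; check your provider's documentation for the exact settings:

```hcl
# Sketch: remote Terraform state in an encrypted S3-compatible bucket
# instead of the local state used in the demo. Bucket and endpoint are
# placeholders.
terraform {
  backend "s3" {
    bucket = "tfstate-openinfra-dev"
    key    = "dev/terraform.tfstate"
    region = "eu-de"

    # The bucket speaks the S3 protocol but is not AWS, so skip the
    # AWS-specific validation checks and point at the storage endpoint.
    endpoint                    = "https://obs.eu-de.otc.t-systems.com"
    skip_credentials_validation = true
    skip_region_validation      = true
    encrypt                     = true
  }
}
```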
A real pleasure, a good show together again. I hope it was interesting and impressive for you. Thank you. Now we're finished; are there any questions so far? Sorry, it was really, really fast, I have to say. If you want the slower version, it's on YouTube. Yeah, we have a roughly one-hour version on the YouTube channel; you will also find it in the OTC community if you want it a little slower. No questions? Yes, one over there. "When I do this in production, I don't want to put the secrets into Git. So when I run the Vault command to pull the secrets from HashiCorp Vault, should I do that from the Git pipeline, or from inside the cloud and deploy from there?" I will repeat the question briefly: I used HashiCorp Vault to get the environment variables, and the question was how best to do this within pipelines. It depends on the case, I have to say, but what we do: we have a GitLab integration, I think for HashiCorp Vault, but I would need to look it up. Basically, there is a token that expires, the GitLab runner gets a fresh token every time, and based on this token we get the secrets. That is the approach. I need to repeat the next question as well: when we used Kafka, was it for Argo CD, or just for demonstration? It was just for demonstration; Argo CD doesn't need it, it's just an application. You can even deploy Terraform through Argo CD, which is also a very interesting approach. You're welcome. Any more questions? OK, then thank you very much, and see you. Bye. That was a very specific one. I think it was a very complicated one.