All right, so next up is Francois. Somebody I was just talking to came to this talk trying to figure out how to do end-to-end testing in their environment, so preview environments sounds rather similar. Take it away, Francois.

Thank you. Test, test, do you hear me well? OK, let's start with implementing preview environments with GitOps in Kubernetes. First, about me: as my accent shows, I'm French, but I work in Hamburg, Germany, where the weather is maybe a bit different from Spain, for those who have already visited the city. I'm a DevOps engineer at Remazing, a mid-sized company, but we have a very small tech team, roughly 10 people, so maybe not the same scale as some of the companies before. In my free time I like open source, so I joined the Kubernetes release team for the last release; I'll pause open source a bit over the summer and maybe come back in the winter. Now, a bit about the company, so you can understand why we built preview environments: it's an agency with a piece of software that we build in-house and sell to other companies. The software is multi-tenant, and I may refer to it later in the talk as Remdash. That's the product we built preview environments for. One year ago, not so long ago, we still used a basic workflow for this main software. We have plenty of microservices, but this one had a simple Git flow: a team of developers pushes to a staging branch, and everything is deployed to the staging environment. One branch, one deployment, everything fine. When we want to merge to production, we do it every Monday: we open a pull request, a release or a hotfix, merge to production, and it's deployed. It's quite nice. So imagine someone pushed a good commit, and you have maybe 10 or 15 pull requests before that release day.
And then someone pushes a wrong commit. What happens? It can seem super simple, and the code looks great, so it gets the approval and is merged to staging. But too bad: it couldn't be tested very well, and staging is broken. So how do you deploy your features on Monday? You need to fix that first feature before merging to production, and this can happen with one pull request, two pull requests, whatever. So let's look at a quick list of pros and cons of this workflow. The first pro: it's quite easy to set up. You define the staging environment in your Bitbucket pipelines YAML, or whatever you use for your pipeline; it's mostly copy-paste, and then you use a flag like production for the production environment. Super easy. And it's fast: to get something into production, you just create one pull request to staging and then one pull request to production, and it's merged. But you get some cons on top of those two nice pros. First, it's hard to review and test. Say someone is about to merge something to staging and you want to review it: you need to check out the branch, pull it, and test that everything works. Maybe you don't have the same environment locally. Or imagine someone is adding a Redis; you then need to set up a Redis locally. It gets messy, and it's hard to just review the pull request or test something quickly. Second, it's not the best for new joiners. New developers joining the team may want to push as much new code as possible to test ("let's try to break it and see if it works") without breaking staging, because a lot of people work with staging, including QA. But with preview environments, which are ephemeral environments on the side, they can break things. Previews are meant to be broken, which is very good. And, as I said, there are other difficulties when testing new infrastructure.
Let's say you want to test Redis or a mail server: it's hard to manually add it to staging without affecting the other features already on staging. And last, as I said before, one pull request, one feature, one branch can break the whole staging. Then you need to fix that feature, and until you do, you cannot merge to production, which slows down your time to market. In a startup environment, time to market is very important, because you want to be better than the other tools. So that led us to this schema, which is about preview environments and GitOps. (If you don't want to stand, there are seats over there.) I would say there are two main cases where GitOps is useful with preview environments. Maybe you already deploy all your infrastructure following GitOps principles; then it's basically copy-paste to deploy previews the same way. Or maybe you want to move to GitOps and first see how it works, without switching production or staging directly; preview environments are a step just before staging or QA. They let you test how GitOps works and try different tools like operators, Flux, or Argo. And because it's a step before staging, it's very likely to break, which is very good: you can test what happens when you have a drift and need to reconcile, when resources break, maybe databases you manage. I would say it's the best way to start with GitOps. As you can see here, the flow is quite simple, and I will go through all the steps. Developers push code to a code repository; in our case it's Bitbucket, and you will see later in the presentation that Bitbucket has some flaws that you maybe don't have with GitHub or GitLab.
So it involves a bit of bash scripting, spoiler. You push your Docker image to a Docker registry, in our case Docker Hub. Then, in another step, you commit the environment, which is just a YAML file and a Docker tag, to make it declarative and reproducible. You commit this to another repository, the ops repository, where you store all your infrastructure. And then an agent, let's say Argo, watches, syncs, and deploys the changes to different namespaces. In the end, if you have 10 pull requests, you have 10 namespaces in your Kubernetes cluster, accessible via different URLs. One disclaimer: with Argo CD you have what are called generators, including the Pull Request Generator, which simplifies most of the steps I will talk about, but it isn't available for Bitbucket yet. So let's dig into this. First, some pitfalls I would like you to know about before implementing preview environments. GitOps is great when you have a working state at the end, because everything runs smoothly, you can trace your commits, you can test; you feel safe, let's say. But while you are still iterating, it's a mess, because for every change you need to push, build an image, and commit everything to the ops repo, so you lose time. So at the beginning, what I would advise is to use a tool like Skaffold that compacts all the steps: build, push, and deploy. In our case it was just Docker and Helm, and compacting all of this let us focus on how the deployment would work and how everything would fit together for the preview environment. In our case we deployed a Laravel application, which is a bit harder to deploy than Node.js or Go, because you have PHP-FPM, Nginx, and all that. Skaffold really saved a lot of time. Now let's dig into the bitbucket-pipelines.yml. For those of you not familiar with Bitbucket, you have one YAML file.
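A Skaffold config for this kind of inner loop could look roughly like the sketch below. The image name, chart path, and release name are illustrative, and the exact fields depend on your Skaffold schema version:

```yaml
# skaffold.yaml, a minimal sketch: build the app image and deploy the Helm chart
# that lives inside the code repo, all in one command
apiVersion: skaffold/v4beta6
kind: Config
build:
  artifacts:
    - image: remazing/remdash        # hypothetical image name
deploy:
  helm:
    releases:
      - name: remdash-preview        # hypothetical release name
        chartPath: ./chart           # the Helm chart stored in the code repo
```

With something like this, `skaffold dev` rebuilds, pushes, and redeploys on every file change, so you can iterate on the chart and the app without going through the full pipeline each time.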
Inside this YAML file you declare everything you want for your pipeline. In this case you can say: I want to listen on two branch patterns, all branches matching feature and all matching fix, because in our case we didn't want to deploy a preview environment for very short-lived branches like hotfixes; we want those to go out as fast as possible. Matching branches trigger two steps. The first, as I said, builds and pushes to a remote Docker registry; the second is the GitOps CD step, where you commit something to the ops repository. So yeah, a bit scary. The build step is super simple: you log in to Docker, build your image, and push it to the remote registry. But because Bitbucket makes things a bit harder sometimes, there is one step worth looking at: this one, where you export the tag the pipeline is currently building into a bash script, which you source later. Why? Let's say your Docker image builds slowly, or your pipeline doesn't have a lot of RAM. You build one Docker image, then you move on to the next step, the GitOps step, but by then the head of the branch may have changed. With this simple Bitbucket trick, you can be sure the two steps reference the same tag. That's all there is to say about it. The second step, which is a mix between CI and CD, is still Bitbucket: the GitOps step. First, we don't want to create a preview environment from master, so we filter it out at the beginning, as you can see highlighted here. Then we import our environment file, so we are sure to use the same tag as in the previous step. Otherwise we would commit a tag to the ops repo, Argo would say "wait, this tag doesn't exist", and your preview would crash.
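The two steps and the tag-export trick described above could be sketched like this. The registry, image name, and helper script are illustrative, not the speaker's exact pipeline:

```yaml
# bitbucket-pipelines.yml, a sketch of the two-step preview pipeline
pipelines:
  branches:
    '{feature/*,fix/*}':             # no previews for short-lived branches like hotfix/*
      - step:
          name: Build and push
          script:
            - docker login -u "$DOCKER_USER" -p "$DOCKER_PASS"
            - export IMAGE_TAG="$BITBUCKET_COMMIT"
            # Persist the tag so the next step uses the SAME commit,
            # even if the branch head moves while this step runs
            - echo "export IMAGE_TAG=$IMAGE_TAG" > environment.sh
            - docker build -t "remazing/remdash:$IMAGE_TAG" .
            - docker push "remazing/remdash:$IMAGE_TAG"
          artifacts:
            - environment.sh         # handed over to the next step
      - step:
          name: GitOps commit
          script:
            - source environment.sh  # reuse the exact tag built above
            - ./commit-preview.sh    # hypothetical script: templates the app YAML and pushes it to the ops repo
```

The `artifacts` mechanism is what carries `environment.sh` between the otherwise isolated steps, which is the whole point of the trick.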
Then we just clone the ops repo, add the SSH keys, and commit the application YAML file. The interesting part is this one: in our case, we store the Helm chart inside the code repo (we use Helm). So any developer can modify the chart. Say you work on a feature that needs a lot more memory: you modify the chart, and the preview environment is deployed with more memory. A developer wants to add Redis for caching database queries? They can modify it directly in the code repo; they don't have to touch the infra repo. For this, we just fill in a template file that I will show you on the next slide. We inject things like the PR ID and PR name, so we can build nice URLs, and also things like the image tags for FPM and Nginx. Then you just commit and push. Next slide: continuous deployment with Argo CD. For Argo CD, we just install it in the cluster; there is plenty of documentation about doing that. In our ops repository we have one file, the template I told you about, with environment variables that get replaced for the current environment. Inside it you put everything you want, like the PR name, which is useful for the URL; you can point to other values files; everything is customizable. Argo watches this file and deploys accordingly. As you can see here, the target revision, highlighted to help a bit, is set to the branch we are working on, that is, the pull request branch. So every change is detected by Argo, and Argo reconciles. Then, why Argo CD? There are other tools, but in our case the question was mainly: do we use Flux or Argo CD? We use Flux for everything related to infrastructure: Prometheus, Grafana, the Traefik ingress controller, cert-manager, Sealed Secrets, whatever.
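The per-pull-request Application file committed to the ops repo could look like the sketch below. The `${...}` placeholders stand for the values the pipeline injects; the repo URL and host scheme are illustrative:

```yaml
# Application template, one file per pull request in the ops repo;
# ${PR_ID}, ${BRANCH}, ${IMAGE_TAG} are replaced by the pipeline before committing
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: preview-${PR_ID}
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@bitbucket.org:example/remdash.git   # hypothetical code repo URL
    path: chart                                      # the Helm chart inside the code repo
    targetRevision: ${BRANCH}                        # the PR branch, so chart changes deploy too
    helm:
      values: |
        image:
          tag: ${IMAGE_TAG}
        host: pr-${PR_ID}.preview.example.com        # hypothetical URL scheme
  destination:
    server: https://kubernetes.default.svc
    namespace: preview-${PR_ID}                      # one namespace per pull request
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Because `targetRevision` points at the PR branch rather than master, a developer's chart modifications (more memory, an added Redis) are picked up without touching the infra repo.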
And it works nicely. It's a CLI; I love it. But for preview environments, you want developers to easily debug what's wrong and see what's happening in your cluster, and sadly, or beautifully, I don't know, Argo is just way nicer there: in one URL you can see what's happening and how it works. So we kept Flux for the infrastructure and Argo CD to deploy preview environments. Quickly, about the Flux structure: in our case, we have one folder per cluster. We have three or four clusters, so it works nicely; maybe if you scale up you'd want to change this layout, but for us it works great. You have all your tools, like I said earlier: ingress controller, Sealed Secrets, monitoring, whatever. Then Argo CD and your applications use all those custom resource definitions inside your charts or your kustomizations. In Argo CD, we manage Argo CD itself with Argo CD, using the app-of-apps pattern. And the previews folder, which I can highlight here, is where we commit all our preview environments. So if you check the master branch of the Argo CD repository, you see all the preview environments that are currently deployed, which makes them easy to delete afterwards: when you close the branch, when you merge a PR, you just delete the file responsible for that preview environment. It also helps with resources: if you see a file that was committed two weeks ago, you may want to delete it. Why is a feature open for two weeks? We want to merge fast and test features. Then the biggest problem, I would say, is secrets with GitOps. When you start with GitOps, you always go through this phase where you commit your secrets to a repository, hopefully a private one that you just test stuff in. But sooner or later you want to store your secrets properly: database credentials, API keys, Docker Hub secrets to access your private registry.
In our case, we use Sealed Secrets. It's super easy: you just install Sealed Secrets, and you have a small CLI to use it. What you commit is only the SealedSecret part, which is a CRD, encrypted against a key held by the controller. No one has access to the private key except the controller, so you can give ops or admins access to the sealing certificate, only they create the sealed secrets, and then you can commit them safely to Git. It's the easiest way we've found. I know there is Vault and other tools you could use, but in terms of being fast to implement, it was very good. Then, in your cluster, you get the actual Secrets. An additional tip: if you use Helm, you can use conditions, a simple if. If you are deploying a production environment, or testing locally, you can use a different sealed-secret file, encrypted with the key of your local kind cluster, or your staging cluster, whatever, which makes testing easier: depending on the values file you use, you use different secrets. You don't have to fetch the controller's certificate and re-encrypt your secrets every time. Then, let's talk about multi-tenancy. Our application, Remdash, has a multi-tenant setup, and for every preview environment, so for every pull request created, we want a URL, and on top of each URL, a tenant, so you can test the different tenants easily. That caused a bit of a problem, because at the beginning we hard-coded the wildcard in Traefik. And then what do you do? Do you modify the Traefik manifest, which lives in another infra repo, because it's just a tool of the cluster and not in the Argo CD repo? What we do instead is combine Traefik as the ingress controller with cert-manager, another tool, which manages HTTPS.
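The Sealed Secrets flow described above produces a file like the sketch below; the names, namespace, and the truncated ciphertext are illustrative:

```yaml
# Created with something like:
#   kubeseal --format yaml < db-secret.yaml > db-sealedsecret.yaml
# Only the in-cluster controller holds the private key needed to decrypt it,
# so this file is safe to commit to Git.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials              # hypothetical secret name
  namespace: preview-42             # sealed secrets are namespace-scoped by default
spec:
  encryptedData:
    DB_PASSWORD: AgB4k...           # ciphertext produced by kubeseal (truncated placeholder)
  template:
    metadata:
      name: db-credentials          # the plain Secret the controller will create
```

On sync, the controller decrypts this and creates an ordinary `Secret` named `db-credentials` in the preview namespace.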
A great thing about cert-manager: it also helps with high availability, because Traefik no longer has to manage all the certificate secrets itself. Those two work together and are deployed with Flux, as I'll show next. So that's how we manage the infrastructure side of preview environments. On one side you have all the cert-manager pieces: the Flux HelmRelease for cert-manager, our ClusterIssuer responsible for the DNS wildcard challenge, and, in our case, a DigitalOcean token secret, but you can put any cloud provider secret there, Cloudflare, whatever. Once that is set up, everything the preview environments and Argo deploy uses those resources. For each environment you commit, you get a Certificate with the URL name you want inside it, it generates your wildcard, and an IngressRoute matching the secret created by that Certificate. So for any pull request created, you get a wildcard certificate on the fly, which I find super nice: no need to SSH into pods and install certbot or whatever. Another challenge we had to work through, with our Elasticsearch database, is hooks. We use Helm, that's how we deployed from the beginning, and Helm hooks map easily onto Argo CD. With hooks you basically say: every time you reconcile and deploy something, in post-install for example, run a job. Then you can run long-running operations after each deployment. In our case, we want everything text-related to be indexed, and it runs on the fly; super nice. You can imagine applying this to whatever you want: seeding new tenants, new data, fake test data, on every deployment. You could even wipe the database and re-seed it with random data. It makes everything easy.
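The per-preview wildcard setup described above could be sketched as the pair below. Issuer name, domains, and service are illustrative, and the Traefik API group and matcher syntax vary by Traefik version:

```yaml
# Wildcard certificate for one preview environment
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: preview-42-cert
spec:
  secretName: preview-42-tls          # cert-manager writes the TLS keypair here
  issuerRef:
    name: letsencrypt-dns             # hypothetical ClusterIssuer doing DNS-01 challenges
    kind: ClusterIssuer
  dnsNames:
    - "pr-42.preview.example.com"
    - "*.pr-42.preview.example.com"   # wildcard so every tenant gets its own subdomain
---
# Traefik IngressRoute referencing the secret created above (Traefik v2 syntax)
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: preview-42
spec:
  entryPoints:
    - websecure
  routes:
    - match: HostRegexp(`{tenant:[a-z0-9-]+}.pr-42.preview.example.com`)
      kind: Rule
      services:
        - name: remdash               # hypothetical app service
          port: 80
  tls:
    secretName: preview-42-tls        # the keypair cert-manager generated
```

Both resources ship inside the preview's Helm chart, so every pull request gets its own wildcard certificate with no manual steps.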
Then, about namespaces. I don't know who uses Lens here, but it's super nice: you have a dropdown where you can see all the namespaces in your Kubernetes cluster. But also, when you run kubectl get namespaces after a while, imagine you open 20 pull requests per week, or way more depending on the size of your organization, you get a huge list of namespaces, and you start grepping through them to find the right one. It's a mess. With Argo there is an easy option to create the namespace by adding just one flag, but it doesn't get deleted as easily. So if you instead create a Namespace resource, then when you delete the preview environment file, Argo goes through all the resources, secrets, whatever, deletes them, and at the end deletes the namespace itself, so you don't end up with empty namespaces in your cluster. Now let's talk about developer experience, because the main goal of preview environments is to improve time to market and developer velocity: you want to push commits, test features better, and push more commits. So, a bit of bash, nothing too hard. Before deploying, we check whether an environment file exists in our ops repository, as you can see on the first line, and if so, we comment on the pull request. I'm sure there are other tools to notify people, but the best is still a comment left on the pull request itself: "this is the URL of your preview environment". So when developers need to test, they open the pull request, see the automated comment, click it, and they have their preview environment where they can do anything they want. They can break anything they want without slowing down the release that will soon go to production. They can try to break it, and that's what we want. If the preview environment is broken, that's good: no clients, no one cares if a preview is broken. It's meant to be broken.
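The two namespace approaches just mentioned, sketched side by side (names illustrative):

```yaml
# Option 1: let Argo CD create the namespace via a sync option;
# it is created on sync but left behind when the Application is deleted
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: preview-42
spec:
  # source/destination omitted for brevity
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
---
# Option 2: declare the namespace as a resource inside the chart itself;
# when the preview's Application file is deleted, Argo CD prunes all
# managed resources, including this Namespace, so nothing is left over
apiVersion: v1
kind: Namespace
metadata:
  name: preview-42
```

Option 2 is what keeps the cluster free of empty `preview-*` namespaces once pull requests are merged or closed.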
Another good thing with Argo CD is the UI, and you want to expose it to your users. By users, I mean developers. You can easily add an IngressRoute, say you have Traefik, so you add an IngressRoute and secure it with a password. You can use roles in Argo CD, so you just give a read-only role to your developers and they can see what's happening; if something breaks, they can push fixes and watch it all in a nice way. And if you don't trust having an operator such as Argo CD, which controls your cluster, behind an open URL, even with basic auth, passwords, and roles, you can stick to port-forwarding and give developers access to Argo CD only through the port-forward. Almost done with DX. About the webhook: Argo CD reconciles every X minutes, I don't know the default, five, three. To speed things up, especially if your Docker build time is long, you can use a webhook, I guess everyone is familiar, and you can set it up with Bitbucket easily; then every time your ops repo changes, Argo reconciles the state immediately and you get faster deployments. And last, I just wanted to add one slide because it's useful: you can give developers some bash helpers to exec into a pod to debug, or to port-forward a database connection, or a mail server that catches all outgoing emails for testing. It's kind of nice, a bit dirty, but it works well. One more tip, which depends on your maturity and security posture: at the beginning we refused root in the containers, but then you cannot install packages. If you want to debug, you cannot just exec in and start creating files or modifying stuff.
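Giving developers a read-only view of Argo CD, as described above, can be done with the RBAC ConfigMap; this is a minimal sketch, to be adapted to your SSO or local-user setup:

```yaml
# argocd-rbac-cm: make the default policy read-only, so developers can
# inspect applications and sync status but cannot sync or delete anything
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.default: role:readonly   # built-in Argo CD role
```

If you prefer not to expose the UI at all, developers can instead reach it through `kubectl port-forward svc/argocd-server -n argocd 8080:443`.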
So in the end, for preview environments, not production, I think it's a good idea to just allow root for debugging; it's a closed environment, so it's fine. And last but not least, thanks to the Flux maintainer Kingdon, I think, on Slack (if you have questions, Slack channels like CNCF or Kubernetes are great), who told me about Renovate bot, which is honestly a life changer. The problem with installing tools is that over time you have upgrades, patches, security issues, or maybe you stay on the same 0.5 version for a year and want to upgrade automatically. You can run this bot on Bitbucket: it runs weekly, daily, depending on how you trigger your pipeline, and automatically creates pull requests. It will say, OK, your Loki instance, for example, needs to be upgraded. You check the changelogs, compare what's changing, ask whether you need to do something manual, maybe remove some CRDs, and then you just click Merge, and you have up-to-date clusters and tools. Great tool. And that was the last slide, so thanks everyone for listening. Maybe it's a bit different from huge clusters and such, but that's how we do it in-house, and preview environments really helped: now you can really easily test features, and with GitOps it's a nice way to start using GitOps before going into production, where if something breaks, bad things happen. That's it. Thanks, especially to GitOpsCon and KubeCon. If you have any questions, let me know.

Any questions? Also, if there's an open chair next to you, raise your hand; we have people standing over here.

You mentioned that you're using Flux CD for infra and Argo CD for developers. Why that split? I understand the Argo CD part for the UI and everything, but then why Flux?

At the beginning it was just a random choice, let's say: let's pick Flux.
It looks great, it works well with Helm, so I started with Flux. Then, when building something for developers, the CLI is not the best for quickly seeing a change; you need to learn a CLI tool. Hence the UI of Argo CD. I checked, there was an old Flux project for a UI, but it doesn't seem to be continued or well maintained. And since Flux runs the production environment and works great, I don't want to move it to Argo CD yet; maybe in six months or a year we'll be mature enough to migrate. But previews are an environment that's not critical, meant for testing stuff, so you can try something nice there without risk. I don't want to move production resources onto something that is, for us, still a test.

As a co-chair, I see a lot of people doing exactly that: their actual infrastructure is spun up with Flux, the applications with Argo. That's totally normal and fine; you don't have to go all-in on one. That's the beauty of Kubernetes and GitOps. Any other questions?

Great talk. I'm just curious where ApplicationSets fit into the preview environment setup that you have. Have you considered them at all within the Argo ecosystem?

Can you repeat a little bit? ApplicationSets? Yes. We don't use ApplicationSets, because we don't need, let's say, the same setup everywhere. They seem really good if you want to generalize things and maybe deploy to several clusters. But in our case it's working great as is, and maybe we'll move to ApplicationSets later. I wish Bitbucket were integrated into the Pull Request Generator, which would make listening on events way faster and better, and we could ditch the whole commit part inside the pipeline. So yeah, a future step. But there is so much to do, like observability and monitoring; we need to prioritize.
And yeah, I would say that's it. Maybe there was one more question? Yeah, nice.

I just wanted to know: how long does it take on average to provision a new environment, and have you taken any shortcuts to bring that time down?

The longest part is the Docker image build, because with Laravel it's kind of heavy: you need to build Nginx, PHP-FPM, and we use Laravel Mix, so you need to pull part of the image in the background. It's a bit of a mess. But deploying the environment itself is super fast, because we use subcharts, for Redis or MySQL, so everything starts quite easily. It's not resource-intensive, because it's just a Laravel app; it's not like starting up a huge cluster of services. So that part takes moments; the build time is roughly five minutes. And then the commit: Bitbucket is a bit slow, I would say, no offense if there are Bitbucket people here, a bit slow to restore the cache and everything. But once it's committed, the Argo webhook triggers and you instantly see the synchronization start and refresh. It's super nice to see how fast it is once it's in the repo.

Awesome. Any other questions? OK... oh, there's one. Yeah, one question.

We also have a similar setup, using Flux actually, and the biggest challenge we currently have is executing end-to-end tests for applications. How do you deal with that?

For end-to-end testing we use Cypress, with the same tests we run on staging; we just run them against the preview URL. You know Cypress, maybe? OK, so we just do end-to-end testing that way. And if you want to test the Docker image itself, you add it between docker build and docker push: you build the image with the test tools and run the tests there. If it breaks, it breaks even before deploying to the preview environment, and you get an alert: "oh, the pipeline broke". Yeah.
Did you expect something like that? I mean, you have several ways to trigger it. You could run your end-to-end tests as a post-install hook job that runs smoke tests or whatever conformance tests you want against your cluster. You have so many options: run them in the pipeline after the deployment, waiting until the state is reconciled, and then run your tests. Or, the way I would do it, with the Helm, I mean Argo CD, hooks: you wait for everything to be deployed and then start your testing. Maybe one more reason to use Argo for previews. You have great tools; just use what works best, I would say.

Thank you very much. Thank you, Francois.
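The post-install test job mentioned in that last answer could be sketched as below; the image tag, URL, and hook policies are illustrative:

```yaml
# A Helm hook job that runs after every (re)deployment of a preview;
# Argo CD honors Helm hooks, so this fires on each successful sync
apiVersion: batch/v1
kind: Job
metadata:
  name: post-deploy-smoke-test
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation   # replace the old job on each sync
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: smoke-test
          image: cypress/included:13.6.0                 # hypothetical pinned Cypress image
          command: ["cypress", "run", "--config", "baseUrl=https://pr-42.preview.example.com"]
```

The same pattern works for the reindexing and data-seeding jobs mentioned earlier in the talk; only the container image and command change.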