All right, I think we can start. Good afternoon. My name is Julien, I work at Microsoft, and I hope you're here for this session on doing DevOps with DC/OS. I'm going to try to share, for 45 minutes, my journey around DevOps. I've been working in the DevOps team inside Microsoft for three years now, and I'm still in this team, doing a lot of open source technology, which is great: Microsoft pays me to do open source. I have a lot of fun, and I try to stay focused on DevOps practices. And when I say DevOps practices with open source: yes, we have Microsoft products that help people do DevOps, maybe you heard about TFS, VSTS, and so on. But since I'm really focused on open source technology, I've been having a lot of fun on the open source side, with Jenkins for example and other open source tools, which is really great. So I'm going to try to share a few practices. The idea is more of a challenge: during these 45 minutes I'm going to try to do just demos, and we're going to implement DevOps practices on a little Java application that I use a lot. I'll explain that in a second. So the idea is 45 minutes of demos only, and only code. Let's cross our fingers and see if it's going to work. Quick agenda. I don't know if you already know about ACS; anyway, yes, I work for Microsoft, but I have nothing to sell, I'm only doing technical stuff. I think I only have one marketing slide. No kidding, I'm just going to explain ACS, maybe you heard about it: Azure Container Service. It's just a quick, easy way to deploy DC/OS, Kubernetes, or another container orchestrator on Azure. I'm going to explain that for people who maybe never used it, and then we're going to go deep on CI, CD, and a bunch of other DevOps practices, depending on the time we have.
I'm going to try to add more and more practices on top of that. Make sense? All right. By the way, as you can hear, I'm French, so sometimes people can't understand me. Don't hesitate, just raise your hand and say "can you repeat that?", and if you have questions, don't hesitate to ask. So yeah, that's the only marketing slide I have: ACS, Azure Container Service. The only preparation I made before the presentation was to deploy a cluster, just a few clicks. We at Microsoft, we like clicks and we like interfaces, using the portal, right? And I deployed, obviously, a DC/OS cluster, but you also have the ability to deploy Swarm or Kubernetes. We also just announced a new service called AKS, which is managed Kubernetes on Azure. Really exciting, but that's not the focus today. I'm going to show you in a bit, on the portal, how I did that. The application I'm going to use today is called Parts Unlimited MRP. It references a book, The Phoenix Project. Did you read it? Who read it? Okay, I really, really encourage you to read it if you're interested in DevOps practices. And the cool thing about it is that it's not a technical book, it's a novel, so it's really fiction. But it's really fun because it has all the clichés in terms of DevOps practices, you know, the dev guys confronting the ops guys all the time, and so on. That's really great. And from that, we developed this application called Parts Unlimited MRP, referencing the book. The application is public and only uses open source technology: it's a Java-based application, running on Tomcat, using MongoDB, and so on. So if you go to the link to the repo, a Microsoft repo, you'll see it's kind of a monolithic application.
And what I'm going to try to do here is split it into kind of microservices, because it's trendy, right? So we're going to make microservices from that, then deploy on DC/OS and do DevOps practices around it. So this is what I said: we're going to do DevOps, DevOps everywhere, with this application. Obviously a CI, which is pretty much the base of DevOps practices: continuously integrating new features and making sure it builds and works all the time. Then I put another Azure thing here called ACR, Azure Container Registry; I don't know if you heard about that one. It's basically the ability to have a private registry for Docker images, PaaS-based, on Azure. It's really simple, again: a few clicks and boom, you have it. At the beginning I'm just going to use the public Docker Hub, and if we have time we can implement that piece; it's really straightforward, you're going to see. Then we're going to do CD, so try to deploy continuously on the cluster using VAMP. Did you already hear about VAMP? It's pretty awesome, actually. I discovered it from the Universe, the DC/OS package catalog; sometimes I just go to the Universe, click deploy on something, and see what happens. That's how I found this one. I think it's really awesome, especially when you start with DC/OS and Mesos, in terms of networking and so on, and especially for the CD piece, when you want to do canary testing, A/B testing, and so on. That's really, really cool. And since we only have 45 minutes, that tool is going to help me a lot to achieve that. And if we have time: telemetry, Application Insights, and so on. One warning: it will be a live demo. I'm a huge fan of The Office, I don't know if you know this series. So let's go, let's start the fun. All right, so this is Azure.
If it's the first time you see Azure: Azure, guys; guys, Azure. That's the portal. The cool thing with it, and again I'm not going to sell Azure, I'm going to be honest: it's basically the same kind of features you can have on AWS or Google Cloud, with some differentiation for sure. Everything you can do from the portal, you can do through the CLI. And, yeah, that's a good question. The question is whether the services I'm going to show you right now are available in all regions, including Germany. Germany is a special one, because that region is kind of siloed for legal purposes, so I'm not sure about that one. I can check later; maybe Rob knows. I'm not sure ACS is available there, but basically ACS is just a bunch of VMs behind the scenes, so I would say yes, because it's just VMs with a best-practices installation of the framework, DC/OS in this case. But let me confirm that after the conference. So yeah, that's the portal. Like I said, I already did this piece, because it can take between 10 and 15 minutes to deploy the service. I do a search for ACS, Azure Container Service, I click on that guy here, create, and then it asks me (remember, at Microsoft we like interfaces and clicks) for the name, how many nodes I want, how many masters, the size of the VMs, how many CPUs, and so on. It's really straightforward: let's say 10 questions, and then it deploys everything. So I already did that. It also asked me for the public key, the SSH key I want to deploy on my cluster. And when it's done, this is what I have. We're not going to spend a lot of time on that.
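For people who prefer the CLI, the portal clicks I just described can be sketched with the Azure CLI of that era. The resource group, names, and VM size below are placeholders, not the exact values I used:

```console
$ az group create --name pumrp-demo --location westeurope
$ az acs create --resource-group pumrp-demo --name pumrp-dcos \
    --orchestrator-type DCOS \
    --master-count 1 --agent-count 5 \
    --agent-vm-size Standard_D2_v2 \
    --ssh-key-value ~/.ssh/id_rsa.pub
```

Same result as the portal: after 10 to 15 minutes you get a DC/OS cluster as a bunch of VMs in the resource group.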
Like I said, it's just a bunch of resources on Azure: VMs, storage, networking, and so on. And when it's done, the way it works on Azure, you just have to open an SSH tunnel to the master. So I'm initiating an SSH connection to my master using my private key and forwarding port 80. When that's done, I can reach the DC/OS portal from localhost, because I'm forwarding port 80 from my master to my localhost. Make sense? It's really straightforward. Like I said, it's a basic cluster: I'm running five private nodes and one in the public area, so six nodes total, and one master. And the first thing we're going to do, just to make sure everything works, and because we're going to use it, is deploy Jenkins. I'm going to use Jenkins as-is: no configuration, no persistent storage and so on, just for the demos. So while we wait for that piece to deploy, I'm going to show you the code I'm going to use. The link I pasted on the slide at the beginning is the repo itself, with the monolithic application. What I also did beforehand: I split it into two services. Like I said, we're going to try to do microservices; we're going to fake it with just two services. One is the web frontend, running on Tomcat. It's not Angular, just some homemade JavaScript framework; you're going to see my beautiful UI skills, it's just a way to render some screens. And then the API. For the API, I'm using Spring for Java, that's my framework. So I created two different repos: the client one and the order one. I also have the Mongo database. That one is just a basic repo; basically it's just a Dockerfile, and I'm injecting data into it, feeding, mocking the database.
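The SSH tunnel I described a moment ago looks roughly like this. The DNS name follows ACS's usual master FQDN pattern and, like the admin user name, is a placeholder you'd replace with your own; ACS exposes the DC/OS master's SSH on port 2200:

```console
$ ssh -fNL 80:localhost:80 -p 2200 \
    azureuser@pumrpdcosmgmt.westeurope.cloudapp.azure.com
# then browse http://localhost to reach the DC/OS UI
```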
When you run the container, it injects fake data, just so we have data for the demo. But that Mongo piece is not really the goal of this talk; it's a stateful piece, let's say. I'm still going to run it on DC/OS, but maybe there are better solutions in terms of storage, maybe a PaaS solution and so on. Just for the demo, it's a basic thing; we're going to pretend it runs and that's not the focus. Let's focus on the client and the API pieces. Make sense? So that's the client repository, which I'll explain after, and that's the order one. So, back to the cluster here. Still deploying, which is weird. I don't like that. Oh, it's running, okay. So I have my Jenkins installed. This is something I like about DC/OS. I use Kubernetes a lot, and also Swarm, since I'm really focused on container technologies, and what I like with DC/OS is that it's really, really simple to start: just push a button, and at least you can do a proof of concept really easily. So we have a basic Jenkins instance here, like you saw. Nothing fancy, no custom configuration. Like I said, we're going to use VAMP. For VAMP, if you go to the official documentation, vamp.io, under documentation, installation, and go to DC/OS here, it explains how you can install VAMP. Or, if you want, they also have a Universe package: if you do a search for VAMP, you can install it from there. I prefer to install from the official documentation. Basically, the documentation says you have to run Elasticsearch first (it gives you the Marathon JSON file here), and then you can install VAMP from another JSON file. So I already prepared that: I have a repo with all my scripts, everything I'm going to do during this demo.
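By the way, the Mongo seeding image I mentioned can be sketched like this. The file names and the use of the official image's init-script hook are my assumptions about the setup, not the exact repo contents:

```dockerfile
FROM mongo:3.4

# Seed data plus an import script. Scripts placed in
# /docker-entrypoint-initdb.d/ are executed by the official mongo image
# on first start, which is how the fake demo data gets injected.
COPY seed-data.json /seed/
COPY import-data.sh /docker-entrypoint-initdb.d/
```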
So if people want to reproduce this at home, you're more than welcome to go to this repo; I'm going to share the link afterwards. Inside the vamp folder, I just copied and pasted the Elasticsearch piece and the VAMP piece from the official documentation. So from the CLI here (I have the DC/OS CLI installed and already set up, authentication and so on), I'm going to install the Elasticsearch piece. Oops, wrong folder. dcos marathon app add... there we go. I push that, and it creates the deployment. Quick check that the service is deploying, and then we'll do the same thing for VAMP: wait for Elasticsearch to run, and prepare VAMP in the meantime. Oh, actually, one more thing: the VAMP documentation tells you that you have the ability to store artifacts in VAMP, and if you want persistent state for those artifacts, you have to use a database. I think they support a few of them, like MySQL, PostgreSQL, and Microsoft SQL Server. It's supposed to work without it, but I'm not going to take a chance; like I said, it's a live demo, so I prefer to prepare everything. So I'm also going to install MySQL and, let's be crazy, use the default one from the Universe again. So that one is supposed to run, okay. It's running here; that one is staging. All right, we'll let it deploy. I'm going to start the deployment of VAMP too, and in the meantime I'll explain the code, the Jenkins pipeline I'm going to use. This is also something I prepared beforehand, the Jenkinsfile, because it can take a while, but I'm going to explain it. Come on guys, I think it's running, right? I don't know why it says it's deploying. Yeah, it's running; I think it's just waiting for the health check. So let's deploy VAMP in the meantime. It's going to create a group and deploy a bunch of services.
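The CLI steps I just ran boil down to something like this sketch. The JSON file names match my repo layout, which you would adapt to your own:

```console
$ dcos marathon app add vamp/elasticsearch.json   # VAMP's Elasticsearch backend
$ dcos marathon app add vamp/vamp.json            # VAMP itself, as a Marathon app
$ dcos package install mysql                      # optional persistent artifact store
```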
It's going to deploy VAMP, and VAMP is a microservices application itself, so you get the gateway, the workflow services, and so on. We're going to see that after. Okay, in the meantime, let me show you the code for the Jenkins piece. Are you familiar with pipelines in Jenkins? Yes, no? Okay. If you use Jenkins, I hope you're using the pipeline, which is an awesome feature. We used to have the freestyle job on Jenkins; I think the name was accurate for what it was, right? At the end of the day, if you had a bunch of different actions in your pipeline (running unit tests, integration tests, deploying in different ways), it was a bunch of different freestyle projects, one calling the other, and so on. Then, since Jenkins 2.0, they implemented this thing called the pipeline. I like to call it CI as code, if I can say that: the ability to store the CI pipeline inside a file. If you use Travis, for example, or CircleCI and all those fancy cloud-based guys, this is also the way they work, which is great: you store the whole pipeline in a YAML file inside your repo, and when you push, it reads this file and runs, step by step, what you wrote. Because at the end, the CI tool is just an orchestration tool: you say, okay, run this script first, and then that one, that one, that one. Makes sense? So this is exactly how the pipeline works on Jenkins. Let me start with the API here. I don't know if you can see, yeah. So that's the code; I'm not going to lie here. If you go to src/main, because it's Java, that means a lot of files everywhere: I have my controllers, models, and so on. Really, really Java. And if I go back to the root here, I have this guy called the Jenkinsfile.
And in here, first, I ask Jenkins to clone the repo on my node. Then I run the build, so I build the Docker image, and I push this image; I'm using the public registry here, but if we have time, we can change to a private one. So I build and push the code. As you can see, I'm not running any tests. Good practices, right? But I would say that one step before this, you'd want to run the unit tests. Since I'm using Gradle to build (that's what I'm doing inside, actually: to build the application, I run the Gradle build and packaging), there is a switch for tests. I'm just not using it here. So I'm building, pushing, and then here I'm doing a bunch of magic with sed commands. You're going to understand why; it involves VAMP at the end. Basically I'm preparing some config files, and when that's done, I push those config files to the VAMP API. I'm also preparing some other DevOps practices here. What are we going to do? A/B testing; I don't know if you're familiar with that, or canary testing. So: I push my code, let's say my v1 code. And let's say we have high velocity; maybe we push 10 times per day, which would be great, we're like Netflix or whatever. Let's go crazy: 20, 30 times a day on my microservices. So I want to be able to do A/B testing. Maybe not go completely crazy and say "I have a new version now, let's redirect 100% of my customers to this new version and see what happens." Maybe that's not good practice. So instead we switch smoothly: we keep the current version, we deploy the v-next version (version two), and thanks to VAMP, we do load balancing.
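A Jenkinsfile along those lines can be sketched like this. It's a sketch only: the stage names, the image name myuser/pumrp-order, and the VAMP address are my assumptions, not the exact file from the repo:

```groovy
node {
    stage('Checkout') {
        checkout scm                      // clone the repo on the build node
    }
    stage('Build') {
        sh './gradlew build'              // add "-x test" to skip tests
        sh "docker build -t myuser/pumrp-order:${env.BUILD_NUMBER} ."
    }
    stage('Push') {
        sh "docker push myuser/pumrp-order:${env.BUILD_NUMBER}"
    }
    stage('Deploy') {
        // template the VAMP YAML with the build number, then POST it
        sh "sed 's/#id#/${env.BUILD_NUMBER}/g' deploy/pumrp-order-deploy.yml > deploy.yml"
        sh "curl -s -X POST http://vamp.internal:8080/api/v1/deployments " +
           "-H 'Content-Type: application/x-yaml' --data-binary @deploy.yml"
    }
    stage('Approve 50/50') {
        input 'Ready to route 50% of the traffic to the new version?'
    }
}
```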
So we do A/B testing, actually, and smoothly redirect 20% of the traffic to the new service, do some telemetry, wait, then do 50-50, 70-30, and then 100% on the new deployment. Makes sense, right? Again, it's just for the demo; in production you'd want to go smoothly. Here, I'm going to deploy, do 50-50, and then 100. Boom. No time to go smoothly, but again, that's just because it's a demo. So let's go back to DC/OS here. Like I said, when you install VAMP (because it's coded as microservices too), it deploys a bunch of microservices, but the most important one is the VAMP dashboard, that guy. So this is VAMP. What I'm going to show you here is really the straightforward usage of VAMP; actually, I'm mostly going to use the API. But they have a bunch of resources, their own concepts, if I can say that: blueprints, breeds, and so on. You can handle a lot of practices using only VAMP. But me, I'm going to use the API: I'm just going to push, as they call it, a deployment; I push my new deployment every time. Those are the sed commands you saw in the pipeline; this is what I'm going to show you. Before that, let me show you the basic hello world in VAMP, because I think it's interesting to know. So I'm going to add a deployment. It's YAML-based. I don't know if you can see in the back; you want me to zoom? Is it good? Yeah, okay. So I name my deployment, and I create what they call a cluster, called hello-world. Then there's what they call the breed; a breed is, let's say, the Docker image in my case, and you can reuse a breed across multiple deployments. The way VAMP is designed is really smart: it splits everything. There's also the gateway, for example; they call that the gateway.
So the way you access the containers is what they call the gateway; the breed is basically the artifact you want to deploy, and so on. Again, the documentation is really well done. Anyway, here I'm going to deploy my hello-world deployment. The deployable, which is the Docker image I want to deploy, is my own web-debug image; that's my way of doing a hello world, let's say. And then I scale it to those resources. And that guy is going to talk to DC/OS and Mesos and do all the magic for me. So I save that, which means deploy. In the meantime, I'm also going to deploy what they call a gateway. The gateway in VAMP is just the way to handle the networking, and also the A/B testing stuff and so on. So here I deploy a new gateway called hello-world. I'm going to use a service port inside my DC/OS cluster, and a virtual host: I have a custom DNS name, julien.work, hosted on GoDaddy. On GoDaddy I'm saying that hello-world.julien.work should resolve to my virtual IP on Azure, because I'm on Azure, right? When you deploy ACS, you get a bunch of load balancers, virtual IPs, and so on, on Azure. So I reach the virtual IP for my agents, and then VAMP handles the rest; it's kind of the Marathon-LB piece, basically. And I'm going to route that to hello-world/hello-world: remember, in the deployment you have the deployment name, the cluster name, the port, so you have that hierarchy, with the indentation and so on. Again, it's really well explained in the documentation. And I redirect 100% of the traffic to that deployment. Make sense? So I save that guy. My deployment here is deployed: if you go back to DC/OS, you can see that my deployment, hello-world, whatever, is running. And let's see my gateway here.
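Roughly, the two YAML artifacts pasted in the UI look like this. The image name, ports, and sizing are placeholders, not the exact values typed in the demo:

```yaml
# Deployment: one cluster, one service (breed + scale)
name: hello-world
clusters:
  hello-world:
    services:
      - breed:
          name: web-debug:1.0
          deployable: <your-docker-hub-user>/web-debug   # placeholder image
          ports:
            webport: 8080/http
        scale:
          cpu: 0.2
          memory: 128MB
          instances: 1
---
# Gateway: a service port plus a virtual host, routing 100% of the
# traffic to deployment/cluster/port
name: hello-world-gateway
port: 9050/http
virtual_hosts:
  - hello-world.julien.work
routes:
  hello-world/hello-world/webport:
    weight: 100%
```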
Also running, with that virtual host. So let's actually try that guy here. And that's my hello world, right? The basic, basic hello world running on VAMP. Awesome GIF, right? Any questions on that? No? Okay, I'm done, thank you. No? Okay, let me check the time. So now I'm going to remove my gateway and remove my deployment. Supposed to... okay. Because I like DevOps, and that means I like to automate everything, right? So what I just did manually (copy, paste, new, et cetera), I now want to do the same way, the same thing, using Jenkins. On top of that, VAMP has the API; this is what I'm going to use, and this is what I have at the bottom of my Jenkinsfile. I still didn't show you the ugly YAML thing I'm doing here. Not that ugly actually, but kind of. All the next stages in Jenkins use the VAMP API. I'm using the internal virtual IP inside my cluster; this is the default one where VAMP is deployed. For each thing I want to do (deployments, gateways, breeds, et cetera, each feature), I can POST with curl, or whatever I want, using the API, and pass the YAML file. So the YAML piece I showed you, that I pasted manually, is what I'm doing here; this is what I'm manipulating with my sed. Let me show you that inside the cluster now. The first file I'm sending is PUMRP-order-deploy.yaml, and I'm doing that twice; you'll understand why. Then I have a gateway-50 and a gateway-100. Like I said, it's pretty crude, the way I'm doing it: when I deploy, I do 50-50 between the v-next and the current one, and then 100. You get the idea. So basically I have a few files. If I go back to my source folder... no, the deploy folder here. This is where I keep the files. The first one I'm playing with is that one. It looks familiar, right?
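Those API calls from the pipeline stages look roughly like this. The host name stands for the internal VIP of the VAMP service in your cluster, and the file names match my repo layout:

```console
# push a deployment artifact to the VAMP API
$ curl -X POST http://<vamp-internal-vip>:8080/api/v1/deployments \
    -H 'Content-Type: application/x-yaml' \
    --data-binary @deploy/PUMRP-order-deploy.yaml

# gateways have their own endpoint; PUT updates an existing one by name
$ curl -X PUT http://<vamp-internal-vip>:8080/api/v1/gateways/pumrp-api \
    -H 'Content-Type: application/x-yaml' \
    --data-binary @deploy/gateway-50.yaml
```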
It's much the same structure that I used for the hello world. But here, I'm tagging my deployment, because (not all the time, but it's going to happen) I'll have two deployments, right? The v-next one and the current one, because I want to load balance. So I'm using this ID tag, and this is the sed I'm doing from my command line: version one, version two. That's my first deployment; then I'll have two and three, three and four, four and five, and so on. And this ID tag is the one from Jenkins. In the Jenkinsfile, you can read environment variables, so I'm reading the ID of the build that's running right now. Make sense? And then I just pass that all the way through. Every time, for the Docker image, I build and push with the tag of the build: for version one, it's going to be docker build -t pumrp-order:1, and so on: two, three, four. I push it, and it's the same tag I pull on my cluster. Make sense? So that's the deployment itself. One more thing here; it's more related to the Docker image itself, but I'm using one environment variable for the FQDN of the Mongo database. And that's actually a good reminder: I have to deploy the Mongo database. Since it's not really the focus of the talk, I'm going to deploy my Mongo database manually, because I don't want to dwell on that one for now. It's a basic, basic deployment. The only thing I'm referencing is my custom image, but like I said, if you go to the Dockerfile, it's based on the official MongoDB image; the only thing I do differently is that I start the database and inject, feed it with, some data, right?
This is the only thing that I'm doing. And I'm exposing on the cluster the default port of the Mongo database. And because it's going to be on the PUMRP database folder or group and the name of the container is also PUMRP database, this is why it's hard-coded, I know. But this is why here I'm calling that one. I'm using the internal load balancer, right? To reach my database. Make sense? All right. And then comes the gateway. So the 50-50 what I'm doing, still using the tag, right, on my build. I'm redirecting 50% of my traffic on the current version. So this is why you have the pre-tag and the ID tag. This is the two variables I'm using. The pre-tag is the previous version. It's the current version minus one. So if it's build number 10 is going to be nine, right? And the ID tag is the current version or the current ID of the build. Make sense? Actually I'm going to run that right now. You're going to understand. So what I'm going to do, I'm just going to copy the URL of my repo here and I'm going to go on Jenkins here. So I'm going to create a new item. Like I said, I'm going to use a pipeline project. So I'm going to call that one order API. It's a free start pipeline project. Sorry. So the only thing that I have to do, it's here I say I want you to grab the pipeline. The Jenkins file from this repository. Since it's public, I don't have to provide any credential, right? But if you want to use a private registry, you can do it. I don't have to change that one because it's the name of the file I'm using. Also what I did or what I have to do actually, back on my Jenkins file here. As you can see, so I'm building, but when I'm going to push my image, I need credential, right? It's not anonymous. I cannot push the image on the Docker Hub like this. So I'm using the credential or vault from Jenkins. So I'm going to create, I have to do that before. I'm going to create a credential called Docker Hub. So to do that, go on credential sections. 
I'm going to say here I want to add credentials. I just have to use the same ID. So this is my Docker Hub username. And that way, Jenkins is going to be able to grab and use those key, right? So now I have that guy. I'm going to start the build. So the way that Jenkins works on Mesos is pretty awesome. You have the master running our containers and every time I'm going to launch a new build, it's going to trigger Mesos task, run the build inside it. So that's why the first time is going to take a while because it's going to boot, or not boot, but create the containers, pull the, for the first time the Jenkins slave image and then run everything. So build the image, pull the official Tomcat image. You can, if you're curious, you can go on the repo see my Docker file. Again, it's from the official image. So that one is Tomcat or GDK, I don't remember. So it's pretty heavy because each other is going to download everything, build my image, push to the Docker Hub. And then inside my pipeline, I also add some validation step here because I'm asking the input. So I don't want to do everything like one shot, 50, 50, 100 and so on. You're going to ask me before, are you ready to do the next step? So it's going to deploy until that stage. So it's going to deploy to my cluster. And the cool thing with the gateway that I show you actually, you're going to see that, it's going to deploy my deployment. So it's going to deploy my API in that case. And it's going to create the gateway, but the gateway, you're not going to touch the gateway until it's going to ask me, do you approve the deployment? That means the gateway is still redirect to the old deployment, right? When I'm going to say, okay, I'm ready to have the 50, 50, then it's going to do the 50, 50 load balancing, right? And so on for the 100. I can still roll back if I want. So it's supposed to build. So also in the meantime, I'm going to deploy my, yeah. 
That one's going to take a while, because again it has to pull the official image for the first time. Not a while; let's say two minutes, but still. What I can do is create another pipeline for my client. I'll call this one client. The client is not the same repo; it's that one. Same process: from Git, and save. And... I think I clicked, yeah, clicked. Okay, let's see if at least we have logs. Makes sense, yeah. And it's also building the image inside a container; maybe not the best practice, but let's say it's demo-wise. Any questions at this point? No? Okay. So the build is done, now it's pushing. Just to show you here (I don't know if you can see well), it pushed with the tag 1, because again it's reading the build number environment variable from Jenkins, and it's the first build, so it's going to be one. Now, because I need two deployments to do load balancing, what I'm going to do right after this is kick off another build, to have one and two; that way we'll have the two deployments for the 50-50, right? The first one is kind of fake, but in production you're supposed to already have the current version and then push the new version each time; here I'm starting from scratch, so I need at least two deployments. This is why I'm going to wait for this one to be done. So it pushed, and... let's forget about that. You didn't see that one, right? Okay, and we rebuild a new one. Okay, perfect. So what it did: it deployed the first one with the tag number 1. Like I said, we need at least two to have the 50-50. So it deployed the first order image, and it created the gateway with the URL I want, the virtual host; this is also something you specify in the gateway, remember, like in the hello world.
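To recap the tag arithmetic behind this 50-50: a minimal sketch, where the placeholder markers #id# and #pre# and the route paths are my illustrative assumptions, not the exact files from the repo:

```shell
# BUILD_NUMBER is provided by Jenkins; hard-coded here for illustration
BUILD_NUMBER=10
ID_TAG="$BUILD_NUMBER"              # tag of the version being deployed now
PRE_TAG=$((BUILD_NUMBER - 1))       # tag of the previous (current) version

# a gateway template with placeholders, 50-50 between the two versions
cat > gateway-50-template.yml <<'EOF'
routes:
  pumrp-order-#pre#/order/webport:
    weight: 50%
  pumrp-order-#id#/order/webport:
    weight: 50%
EOF

# substitute the build numbers before POSTing the result to the VAMP API
sed -e "s/#id#/$ID_TAG/g" -e "s/#pre#/$PRE_TAG/g" \
    gateway-50-template.yml > gateway-50.yml
cat gateway-50.yml
```

On the next build, #pre# becomes 10 and #id# becomes 11, and so on: that's why the pipeline always keeps exactly two versions in play.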
So here what I'm using is api.pumrp.julien.work, my custom DNS name, but you can use your own. And for the client, the web frontend, it's the same thing but without the api prefix. So I'm exposing the API publicly, and also the web frontend in this case. Let me just make sure it works. Yeah: if I go to the /orders route, it's just a good way to test the connection between the API and the MongoDB inside my cluster. I know this /orders route; I have another one for quotes, and you'll see why, because the application itself is kind of an e-commerce solution, really light. Basically I'm creating new orders in my database; I can delete, I can update, and so on, reaching the MongoDB database. So that's the API, and the good thing is that it's accessible from the outside. That's great. Now let's see where the frontend is. Here: same thing, it's asking me for approval. So it's deployed, and again I'm going to kick off a new build. Perfect. Same thing here: the client should be on pumrp... good demo. Or maybe the route is not... it should be okay. Come on, client, you should be okay. Let's try with another browser, the awesome Safari browser. Just the cache, I guess. Anyway, just to show you the UI. Like I said, awesome UI skills from Julien. That's my application: I have a menu on the left here; you can see I can reach the dealers, I can do quotes. You can see I have some font issues, but you're supposed to be able to add, edit, remove, the basic CRUD stuff, right? And everything is backed by the MongoDB database. As you can see, I have dealers, quotes, orders, delivery, some settings. Settings has nothing, so I'm open to pull requests if you want. And behind the scenes, again, is my API.
Now, if I put on the DevOps workflow hat, it could be fun to add a new route because I want a new feature. For example, a way to browse the catalog, to see all the things you can order online. So I want a new button called Catalog here, and in the same way the rest of the application is coded, it's going to reach a dedicated route on my API where I can do a GET, a POST, a DELETE, everything. That also means I need a new collection in my database to store it all, a new route on the API, and a new button on the web frontend. So let's do that. Actually, let's just uncomment the code: I'm okay with live demos, but not live coding, come on. So, on the order API — again, the database is not the focus of this talk — what I'm going to do to the code is really, really ugly, because I'm going to do it through the GitHub web interface. I have this catalog controller here and, as you can see, it's commented out. So I'm going to edit this file and... yeah, beautiful, right? Commit message: "add catalog route". Straight to master, boom. Really DevOps. Best practice would of course be a branching strategy and branch policies; and in Jenkins, which I didn't set up here, you also have the ability to trigger from new pull requests or new commits. So I'm just going to commit that. Since we have both CI and CD implemented, this pipeline is going to kick off automatically, because it detects a new commit on my repo. It's going to build the new code, push the image, and — this is where it gets interesting — send it to the gateway and everything. I'm doing this only on the API, so there is no impact on the UI; I'm just adding a route. It's a good way to test behind the scenes.
That means I'm going to be able to hit api.pumrp — I mean the FQDN — slash catalog. This is what I implemented. Again, nothing on the UI, so the end user is not supposed to know about or use this feature yet, but it's a good way to smoothly deploy the new version of my code. In the meantime, I'm also going to implement the same thing on the web side. Okay, it's here. On the web, same thing: I'm going to uncomment this piece, which is the Catalog button on the left, and that button reads from and uses the API. Commit: "catalog". Again, straight to master — that'd be crazy. Meanwhile — and this is the beauty of the gateway — I can start to build and deploy on VAMP, I mean deploy and run that on my DC/OS cluster. And because I'm using the gateway — remember when it asked me, "are you sure you want to go to the next step?", asking to confirm — that confirmation is the moment it actually redirects my end users. Before that, there is no impact for the users, since it's load balancing: I'm deploying and running both applications, the current one and the new one I just modified. That means I can still do internal tests if I want. With VAMP — I didn't talk about this — you can also do canary testing with conditions, for example on HTTP headers, and say only the users with this IP or this user agent can reach my new version, which is really great for canary testing. So there are a bunch of different ways to test and try. Let's try that with the order API. It's deployed in the cluster, so in the deployments I have pumrp-order 2 and 3 — 3 is still deploying, but it will get there. And my gateway behind that is this one, the pumrp API, and it's redirecting 100% of the traffic here, as you can see.
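The header-based canary filtering mentioned above comes down to matching a request attribute against a rule and routing accordingly. Here is a rough sketch of the idea; the version labels come from the demo, but the matching logic and values are illustrative, not VAMP's actual condition syntax:

```java
// Rough sketch of condition-based canary routing: requests matching a
// condition (here, a User-Agent substring) reach the new version, while
// everyone else stays on the current one. Values are illustrative only.
public class CanaryFilter {
    private final String userAgentMarker;

    public CanaryFilter(String userAgentMarker) {
        this.userAgentMarker = userAgentMarker;
    }

    public String route(String userAgent) {
        return userAgent != null && userAgent.contains(userAgentMarker)
                ? "pumrp-order:3"   // new version, canary users only
                : "pumrp-order:2";  // current version for everyone else
    }
}
```

The same shape works for source-IP conditions or custom headers: only requests that satisfy the condition ever see the new deployment, so a bad release is contained to the canary group.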
Sorry — here, I don't know if you can see well, but it's only routing to version 2 right now. When I push the button to go ahead with the deployment, it's going to switch to versions 2 and 3 and direct traffic 50-50 to both of them, and this is where there is an impact for the user. But before that, just to confirm I don't have the catalog route yet, I'm going to use Postman, because with Postman I don't think I have any cache. Yep, here. If I hit /ping, that's the root call; I know I have data, just to make sure. And /catalog: not found, so I get a 404 error returned by Spring. Now let's do the 50-50 deployment. Proceed — done. If I go back to the gateway here on my API, as you can see, we have 50-50. That means that, without any extra logic, I should get data 50% of the time. And again, the cool thing is that I'm not impacting my users, because we didn't touch the client, the web frontend: if I refresh it, there is nothing new until I press the button. So let's move on — actually, let's move fully: 100% of the traffic to the new version of the API. The last step is just to remove version 2, the previous version, from my VAMP setup, so I don't end up with 10 different deployments running on my cluster at the same time. Now back to the client. Here, let's say I want 50-50 as well. Same concept, right? The gateway, 50-50. So if I refresh here — I know I have to clear the history first — let's try. Oh, I have the catalog here. Still a bug here, but I have my catalog. And because the API is now redirecting 100% of the time to the new version, it's supposed to work. Again, user agents, custom flags, canary testing — all of that could be implemented here, which means we can do really, really fun DevOps stuff. And again, yes, I agree with that confirmation.
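Conceptually, the 50-50 split the gateway applies is just weighted routing between the two versions. A minimal model of that behavior might look like this; the version labels are from the demo, everything else is an illustrative sketch rather than how VAMP is implemented:

```java
import java.util.Random;

// Minimal model of weighted routing between two service versions, as the
// gateway does when a release is split 50-50 (or any other percentage).
public class WeightedGateway {
    private final Random random = new Random();
    private final int newVersionWeight; // percentage of traffic for the new version

    public WeightedGateway(int newVersionWeight) {
        this.newVersionWeight = newVersionWeight;
    }

    public String route() {
        return random.nextInt(100) < newVersionWeight
                ? "pumrp-order:3"   // new version
                : "pumrp-order:2";  // current version
    }
}
```

Moving from 50-50 to a full cutover is then just raising the weight to 100, which matches the "proceed" step in the demo: no redeploy, only a routing change.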
Let's proceed here, and it's going to switch 100% of the traffic on my gateways. So, no time left, unfortunately. What I can offer you: if you download my slides — I don't know if it's too late to update them — but anyway, on my GitHub repo I already created a repo called democonf. There you'll be able to see everything if you want to reproduce the demo; all the code is open source, so you can retry it at home. And I'm also going to post a video: I'm going to record one covering all of this, plus the other practices like telemetry that I didn't have time to talk about, if you want to go further. So, sorry, we don't have time for questions, I guess, but if you have any, let me know — I'm going to be around, and I have another talk tomorrow about autoscaling on DC/OS and Mesos. Thank you very much for your time.