OK, so thank you very much for having me. I'm going to be your host today. Let me just bring up the slides and exit this one. OK, so before we start: this is session number two. We were lucky enough to do one before this in September. Have you been to the previous one? Raise your hand. One hand here. One at the back. That's it. OK, that's really good. So what we're going to do today is have a look at this Kubernetes thing. How does it work? How do we play with it? That's the goal of today. Most of the time you will get stuck, so you will need to call me. And my name is quite complicated, so first things first, let's sort this out. You can call me any of these names, and I will come to you and help you out. So you can call me Daniele, but please, please, please, do not call me Danielle, with the French pronunciation. That's the only rule for the workshop. So who am I? I've got a Kubernetes certification, so I think I've got some credentials to teach Kubernetes. And I was lucky enough to speak at KubeCon, which is the Kubernetes conference, twice, in Shanghai. It was a very, very nice experience. But we're not here to talk about me and what I do. No one cares. We're here to talk about Kubernetes and how we're going to use it. So the goal of this three-hour workshop is to learn how we can build and deploy applications. If you are a developer, or someone who's interested in just deploying applications, first of all we need to understand: what do we need to use? How do we need to change our code? Is there anything we need to change when we deploy to Kubernetes? That's the first thing we're going to do. The second one is that Kubernetes comes with some rather unusual terms. We don't call things by their usual names. We don't call a load balancer a load balancer; we call it a service. So we're going to learn how to speak the Kubernetes jargon.
And then the third one, I think this is quite ambitious. The material that we have today also goes into deploying locally and then deploying into the cloud as well, into AWS or Google Cloud. I think it's going to be a little bit ambitious to cover that too, but the material covers it, and you can practice deploying into the cloud on your own. So what are we going to do today? We've got three hours. We started a little bit late, so maybe we're not going to finish at five; we're going to finish a little bit later than that. But the plan is like this. First, we're going to build an application, because we need to deploy something into this cluster. So we're going to build our own application. Then we're going to package it as a container, because Kubernetes doesn't understand applications like Java or Node.js; the only thing it understands is containers. So we're going to have a look at containers. And then I've got part two twice. OK, that's a mistake, this should be part three. But we're going to have a look at how we deploy that into Kubernetes, and what kind of things we need to do. Now, there are more parts that we're not going to do today, but you can do them when you go back home. You can take the material with you and practice at home. Those parts are basically scaling, handling state, and deploying into the cloud. It's unlikely we're going to be able to cover those today. And I hope you have time, because if you complete the last challenge, you also get a certificate of completion for today, which I think is quite nice as well. So how does it work? I don't think you want to listen to me talking all the time; you want to get your hands dirty. So we're going to have a little bit of me explaining what we're going to do, and then you're going to do it.
And it's going to be myself and the rest of the people in the back just helping you out, making sure that you get these applications deployed into Kubernetes. I've got a very ambitious agenda, and we're already late, so we're probably not going to finish at five because we started later than that. But the plan is to finish in three hours anyway. So, Kubernetes is a little bit tricky. Sometimes you might notice that your minikube doesn't work, or you don't have the right version of Docker, or something else. We're going to help you out, but sometimes you're going to get stuck. And instead of just being stuck while we fix your problem, it makes sense if you pair with the person next to you and keep going through the material. We're still going to help you fix everything, but I think you should follow the rest of the material as well. We've also got Slack, so if anyone gets lost, they can connect to Slack and message me, and I can help. But other than that, I think it's not absolutely necessary. So that is the plan. Before I start, are there any questions? How many of you are stuck installing Docker or minikube or kubectl? Raise your hand. I can see, OK, well, we are sorting that out. We are working on it. Anyone else? OK, don't worry, yes, I'm going to help you out. Don't worry if this is broken, just raise your hand and we will help you out. So this is the second time we run this workshop, so please be nice. We haven't found the perfect way to teach Kubernetes in just three hours. We usually run three-day workshops, so three hours is a lot less, and we're trying to do our best to make it work. Any questions? Are you ready to start? I don't see you excited. Are you ready to start? Oh, there we go. OK, let me grab the slides. So what we're going to do today is build an application. First of all, we need an application to deploy into Kubernetes. And this application looks like this.
It's called Knote. And what it does is a very simple note-taking application. You can write some content, click on Publish, done; it's going to save some notes. We're going to add something else into the mix just to make it a little bit more complicated: you can also add a file. It's going to upload the image and insert that image into the text area. Please notice what kind of gibberish is in there. This is Markdown. I don't know if you're familiar with Markdown, but it's basically just plain text with some extra markup. So when we save that and click on Publish, we also get the image as well as the text. So this is a little bit more complicated than just saving text: we're also saving images, which is going to give us the opportunity to explore other areas as well. So for the application that we need to build, we have a couple of requirements. We need an application, something to serve these pages. We need a database. And we talked about files, so we also need to store those files somewhere. So if I think about it, I definitely need something like either Node.js or Java to serve the application. Then I need a database. I picked MongoDB because I don't need to define a schema or anything like that, so it's a little bit easier to get going. And I need to connect those two together. The plan is to use just static pages. So if you are a React fan, I'm sorry, I'm going to disappoint you today. We're going to upload files and all the rest. This is a very simple application, so you won't be able to deploy it to production as-is, with multiple users and so on and so forth. But if you want to go back later and refactor it however you want, that's a good opportunity. So that is the plan. And we're not going to use Postgres because it would be more complicated. So we're going to build this application first. I want to build it locally, and this is what you're going to do.
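To give you an idea of the gibberish on screen, a note with an uploaded image might look something like this in Markdown (the upload path here is an assumption, just to illustrate the markup):

```markdown
Here is my note, with **some bold text** and a picture of a cat:

![a cat](/uploads/cat.jpg)
```

When the note is published, the markup is rendered as formatted text with the image inline.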
You're going to create it, connect it to MongoDB, and then test it locally. I want to see you uploading images, uploading text, and just seeing the application run locally. Why? Because first we build it locally, then we package it as a container, then we put it inside Kubernetes. So this is the first step. Now, I know that you might miss some of the steps; we've also got a GitHub repo. But we've got this URL. You should sign up to this website, which is called academy.learnk8s.io. You go inside, there is a page which says sign in, just enter your email address, click on sign in, and then you get an email back. Click on the link, and then you're logged in. Everything we do is going to be from the academy, so you can follow along. You can click on Start, and then it's the first module. How many of you here are doing the Java version? I can see a couple of hands. Last time we didn't have anyone, so I'm really glad that we have people with a background in Java. So if you are in Node.js, just click on the Node.js version; otherwise, click on the Java version. The first part actually describes how to write this application. So what I want you to do is go and do it. But before you do, I want to show you what the end result should look like. You're going to clone a repository, right? Either download or clone a repository. Which repository do you need? We have two: I'm going to show you the JavaScript version, but the Java version is the same. So we have knote-js, or the Java one, knote-java, and this repository is what we're going to use today. It's got five different chapters, and we're going to do the first three, OK? So you're going to download this, either clone it or download it, and then you're going to have it locally, like I do here. So I can say, ooh, it's quite busy. Then we're going to do part one, so you're going to cd into 01.
And you're going to install all the packages with npm install, or, if you're using Java, you do mvn install. You're going to install all the packages. And then I want you to start the application with, let me just, how do I make it? OK, that is not going to work, OK? So what I want you to do is start the application. The application is failing, right? Because there is no database. So what you will need to do is start the database as well, OK? Once you do that, the application is available. We click on the application, and it's running. Then I want you to check that it actually works. So, is this OK? Yes? You upload an image, and then publish it. Yeah? You can publish it and then check that it actually works. OK, so that's the end goal. Now I'm going to go back to the slides. So this is the plan: you go to this URL, you look at the first page, and you will find the steps to download this and then run your own application, OK? We have half an hour to do that, so it's going to be very, very quick. And after that, we're going to start with Docker. You don't need to complete this part to enjoy the second one, but I think it's still better if you do, OK? So myself and the others are going to be around to help you out. But the plan is: launch the app, upload an image, add a note to your note app, OK? So, some of you managed to get the application up and running; they added a note and uploaded a picture of a cat. If you did that, well done. If you haven't, don't worry, we're going to try again, but this time we're going to do it with Docker, OK? So I'm going to start the next part, and we're going to talk about Linux containers and Docker. I suggest that if you are stuck or someone is helping you, we pause for a second, we do the lecture, then we go back, OK? You can decide to go back and do the previous session or you can do the next one as well. It's up to you, OK?
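The steps just described might look roughly like this on the command line (the entry point file name and the way you start MongoDB may differ in your setup; treat this as a sketch):

```shell
cd 01

# Install the dependencies (or: mvn install for the Java version)
npm install

# In one terminal, start the database (assuming MongoDB is installed locally)
mongod

# In another terminal, start the application, then open it in the browser
node index.js
```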
OK, so let's have a look. So generally, you are at work and this is the kind of discussion you have with your colleagues. I built a Node.js app, or a Java app. It's great. And someone comes to you and says, OK, can I have it? Of course you can. Here you go: there's a JavaScript file, some files, some images. I'm going to give it to you as a zip file, and done. The person goes off, they run it, and it doesn't work. It's rubbish. Why? Because they don't have the right version. It's missing stuff that is installed on my laptop, so for them it doesn't work. But it kind of does work, right? Yeah, it does work on my machine. But what about everyone else? So these kinds of problems tend to be quite common when you have libraries which depend on your operating system. You might be familiar with technologies like Puppeteer or PhantomJS, which are basically browsers that you install locally, and then you can do things like Selenium, or a bunch of different things. These kinds of binaries tend to be compiled for a specific operating system. So if I have a Windows machine and you have a Mac, how do we share the same file? It doesn't work. I cannot just simply zip something and give it to you; it's never going to work. And if you are in the JavaScript community, it's even worse, because we've got things like Sass, for example. Not only are there different versions for different operating systems, there are also different versions depending on the Node.js version you're using, right? So it might work with versions up to 11, and then it fails with 12. It's very, very annoying. What if we could have something like the zip we had before, where we just grab everything together and package it, but this time all of the dependencies, even dependencies like Sass or Puppeteer, are packaged as part of the zip?
So basically, this zip file, this archive, is self-contained, right? There is no other dependency on the operating system or anything else. We call these sorts of packages Docker containers, or just containers in general, OK? Docker is the technology that popularized them, but it's not the only technology that can create containers. So what are these containers? Basically, just like the zip file we had at the beginning, but with some extra stuff, OK? And those are what we use to encapsulate our files as well as the dependencies on other libraries. So we talked about Docker. This is the most popular one, and that's the one you also installed locally to join this workshop, OK? So how does Docker work? Usually, it comes in two parts, OK? There is the Docker CLI, a command line tool, where you type the commands. But that binary doesn't actually do the work; it just sends a message to someone else. And that someone else is the Docker daemon, OK? So when you say, I want to run a container, that message goes to the Docker daemon, and the daemon creates the container, OK? So what we're going to do now is package the application as a Docker container, OK? We're going to write this file called a Dockerfile, which is a recipe to create the container and describes what kind of things should go inside, OK? And then we're going to use this other command called docker build. And docker build, well, it does it. We're going to give it a name; -t is for name, OK? Actually, it's for tag, but pretend it's a name, OK? And the other thing, this is really important: you see here, there is a dot, OK? That means where the Dockerfile is located, and the dot means the current directory, OK? If you forget the dot, you will see an error. If you see an error, just raise your hand and we will help you out. So that's how we build these Docker containers, but how do we run them? For running Docker containers, we use docker run, OK?
And then we've got an extra flag called -ti. I usually like to remember it as terminal and interactive, so I can play with the terminal inside the container, OK? And then the name of the image that I want to run. So before, we were building with docker build -t app, so I need to put the same name there as well. OK, so we're going to run it, and we're going to see the application running inside a container. We're going to do exactly what you did before, but this time we're going to run the application from within a container, OK? So that's what we're going to do. You write the Dockerfile. You run docker build, and from that recipe you create an image. Once you have that image locally, you run that Docker image and start the application again. The result will be the same, OK? You will see the same application with the same cat, but it will run inside the container instead of running locally on your laptop. Again, all of this is in the same material as before, so you can just go and click on Start, and then, depending on which version you've got, the part we are focusing on right now is this one, right? The second part, where we build a container. So I want to show you what it looks like and what the end result should be. If you're following along and you have the repo, then we're looking at part number two, OK? So there's a folder 02. I can see what's inside, and I can see that there are basically the same files as before, but there is an extra file called Dockerfile. Inside this Dockerfile, there is a recipe to build what is basically a zip file, an archive. The first line says from which image I should start. This is like an ISO: if you're familiar with virtual machines, this is basically the base image, the base ISO, the base operating system I want to start from.
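A Dockerfile along the lines of the one being described might look like this (the exact base image tag and entry point are assumptions):

```dockerfile
# First line: the base image to start from, like a base ISO / operating system
FROM node:12-slim

# Second line: copy all the local files into the container
COPY . .

# Third line: install all the dependencies
RUN npm install

# Fourth line: the default command when the container runs
CMD ["node", "index.js"]
```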
And the second line is a little bit cryptic, but what it's saying is: copy all the files that you see locally and put them inside the container. The third line is: install all the dependencies, because we need them to run this application. And the fourth line is: the default command when this container runs is to run the application. So when I have this, I can just do a docker build, dash t, dot. And this is going to go and execute these steps one by one and build the container. It's going to take a little bit of time; it's going to download all the dependencies, and eventually it's going to create an image, like here. Now that I've built the image, I can run it. The way I do it is with docker run. Hopefully I've got it, yeah, I've got it prepared in advance because it's a long command. So I do a docker run, and I start my application. You can see here there are a lot of flags. What are these flags? Let's have a look. First of all, there is the -p flag. The -p flag basically means: on which port of my laptop should this be exposed, so I can visit the application. Then there is -e, for environment. This is how we pass information inside the container about where the database is located. And the other three flags are a little bit more enigmatic. One is the network; I'll ask you to trust me on this one. The other two: we remove the container when it's done, and the name of the container is knote. So we run this, I press enter, and my application is broken. Why is that? There's no MongoDB running. Well done. So how do we run MongoDB? We're going to run it as a container again. Everything we run in this session is going to be a container. So I can type docker run, ooh, OK, and this command is going to run MongoDB as a container. Before, to have MongoDB, you had either to install it or to run an extra command to get it running.
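Put together, the build and run steps from the demo look roughly like this (the image name, network name, port, and connection string are assumptions based on what's on screen):

```shell
# Build the image from the Dockerfile in the current directory (note the dot)
docker build -t knote .

# A shared network, so the app can find the database by name
docker network create knote

# Run MongoDB as a container on that network
docker run --name=mongo --rm --network=knote mongo

# Run the app: -p exposes the port, -e tells it where the database is,
# --rm removes the container when it's done
docker run \
  --name=knote \
  --rm \
  --network=knote \
  -p 3000:3000 \
  -e MONGO_URL=mongodb://mongo:27017/dev \
  knote
```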
Now we just download it and run it like this. And you can see that it is working. So if I go back, I can see my application is now running, right? Because there is a database where I can save all these notes. I can click on localhost, and I can see the application up and running. It looks the same, but this time it's running inside a Docker container. And the good thing is, if someone asks me, please can you share the application, then even if you're running Windows and I've got a Mac, we can still see the same application. You won't get a surprise this time. So I want you to do the same. If you're running Docker Toolbox, you won't be able to get to this application through localhost; you will need the IP address of your Docker machine. I understand that this is cryptic, so when you get to that point, raise your hand and we will help you out. We're going to have 45 minutes now to try and run the application as a Docker container. Any questions? All good. So, some of you managed to get through the Docker part: you packaged the application as a Docker container, you ran it, and you saw exactly the same thing as before. We said that the difference is that now you can grab that container and give it to your friends or your colleagues, and they will see exactly the same thing. And that's the benefit of containers. Now, I'm going to go back to the presentation and we're going to discuss how these containers are useful, but also: when there are too many containers, what do we do? So if you're doing the material, I'm asking you to pause for a second. We're going to do the lecture and then you can continue, but I think it's important that you listen to the next part. So generally, we want to deploy applications onto servers, or virtual machines, or Raspberry Pis. I'm not judging; you can do anything you want. But generally, what we do is we provision the environment.
So we install the runtimes, we install Java, and then we place the application inside. Now, if you have to do that for just one application, that's OK. But what if your server needs several different binaries, several different applications? You need to spend the entire day just clicking and installing stuff: .NET Framework, Java, whatever you have. But I don't know if you noticed: with Docker, we just need to do a docker run. It could be Ruby, it could be Java, it could be Node.js; it's just a docker run. So in reality, all the applications that you build look the same. As long as you can do a docker run, you can deploy them on your server. And that's the beauty of Docker containers, and that's why they are so popular and so many people are using them. So easy to share, so easy to run. So we have our application, and we wrapped it inside a Docker container, and we wrapped the database inside a Docker container as well, and then we connected those two together. Ideally, I want the same inside a server as well. However, when I run this application at scale, there are a couple of challenges. I've done a docker run for MongoDB and a docker run for the application, but what if my note-taking application with the cat is extremely popular? Are you ready to go in and type docker run 20 times? I mean, maybe you can. But what if it's extremely popular, right? Are you going to get inside your virtual machine, inside your server, and type docker run 100 times, 1,000 times? That's a little bit tricky. The other thing that is tricky is that sometimes we have so many Docker containers that we've got more than one machine. So we need to go inside one machine and launch a container, exit, go into the other one, launch a container. It's very, very inconvenient. Well, it's inconvenient if you've got three of them. But what if you've got 100,000 servers, like Google scale? Or even if it's not Google scale, right?
You might do machine learning and need 100 machines. That's a lot of stuff. So is this Docker useful at all? It's rubbish again. What should we do? Well, everything was simple when we had a single machine, because everything can be done locally. But it becomes complex when we have more than one machine. Now, again, we can blame Docker for that, but the reality is that this sort of problem is solved by another set of tools called container orchestrators. There are several of them, and you might have heard of some, like Mesos, which has been renamed by the way, or Nomad, or, if you are working with Amazon Web Services, you might have heard of ECS, the Elastic Container Service. Those are orchestrators, like Kubernetes. The only thing is that it's not a fair fight: Kubernetes is by far the most popular container orchestrator. So what is this Kubernetes and how does it work? Oh, do you want to cover Nomad today? Is there a reason why Kubernetes won? I think, let me finish the presentation and then I'll address your question, okay? So, Kubernetes: what is it and how does it work? First of all, the reason why Kubernetes is so popular is that what we basically do is take a collection of machines, install the master node, make the other computers join the cluster, and they will just behave as a single unit, okay? That's really powerful. When we deploy something, we just deploy that something into a single unit, which is much easier to reason about. The reality is that Kubernetes will take over and deploy it onto the servers, deciding where these applications should go and how many of them, right? So what we're gonna do today is actually try and deploy this application.
So in this case, I've got a red page, but the reality is that we're gonna deploy our own note-taking application inside Kubernetes. Generally, we want our users to get to the application. So when we have two instances of it, how do we distribute the traffic to the two instances? If you were to do it right now without Kubernetes, what would you use to distribute the load? I would use a load balancer, right? And that's what we do in Kubernetes as well. The other thing that sometimes happens is you might have more than one application, right? You might do microservices. So how do we distribute the traffic between the two applications? How do we route the traffic between the two? Any idea? So, sort of a gateway, more like a router, right? We need something to distribute this load: if this is for app A, we send it to the left; if this is for app B, we send it to the right. In Kubernetes we have exactly the same components, but in Kubernetes we like everything to have a different name, because we want to feel special, really, okay? So instead of calling them load balancers, we call them services. Instead of calling the external load balancer, the black bar, a load balancer, we call it an ingress, okay? And why stop there, right? I mean, we can call everything by a different name. And this is exactly what we do: we call the application instances pods, right? That's how it works. So these are the basic components in Kubernetes: we've got pods, services, and ingresses. But there is another one which is not in the picture and which is extremely important, because this is what you're gonna use right now to deploy your application: that's called a deployment. So a deployment is basically a recipe to create instances of your application, okay? You say, the application looks like this, and then you say how many copies.
The deployment is in charge of creating those copies and then watching over them, okay? So if you say there are five and one is deleted, then Kubernetes will bring it back up, okay? The deployment will do the job of watching over them and making sure that they always work. Now, everything you see here is what we're gonna do right now. We're gonna deploy our application, we're gonna use a deployment and a service to expose it, have our application inside the cluster, and maybe scale it and see if deleting one pod is gonna recreate it. To do that, we need a cluster, okay? So Kubernetes is basically just a container orchestrator, so we need a Kubernetes; where do I go and buy one? It turns out that you install minikube, and minikube is a local cluster, okay? The reason why we find it useful is because it's very easy to start, and when something goes wrong, you can just delete it and restart it, okay? Generally, we don't have the same sort of luxury when we deploy things in the cloud. You cannot just simply go and delete the dev server, right, or the dev cluster. I mean, you can, but the consequences are a little bit different. So what we do is we have a local cluster. You're gonna start minikube, okay? And then we've got something else called kubectl, which is a command line tool which sends commands to the cluster. In the same way you have the Docker CLI sending commands to the daemon, we have kubectl sending commands to the cluster. So we're gonna use kubectl to send the deployment and to create these resources. That's how it works. Now, it would be too easy to just send commands. In Kubernetes, instead of sending commands, we send descriptions of what we want, okay? And these descriptions are written in a language called YAML, which is just a configuration language, right? And we say what we want, how many copies, and all this sort of configuration.
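To give you an idea, a minimal deployment description in YAML might look like this (the names and the image tag are assumptions; the important part is the image line at the end):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: knote
spec:
  replicas: 1              # how many copies we want
  selector:
    matchLabels:
      app: knote
  template:                # what each copy should look like
    metadata:
      labels:
        app: knote
    spec:
      containers:
        - name: knote
          image: knote:1.0.0   # which image to deploy: pay attention to this line
```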
So generally the question I get at this point is: do I need to memorize all of this? Yes. No, I'm joking. You don't have to. Okay, this is very complex; I'm not expecting you to remember any of it. Maybe just a little bit, just this one line, the image, right? What kind of thing I want to deploy. Everything else, with time, you'll understand how it works. So don't memorize this, just have a look at it and pay attention to the last line, that's it. The language is called YAML, and it's basically just a superset of JSON. So if you want to, and we generally don't do this, you can also write the same definition as a JSON payload. It's much harder to write because you need all the quotes and all the curly braces, but if you like hard things, you can do that as well. Okay? So when you send that request, it gets to Kubernetes, and Kubernetes will eventually create these resources on your behalf. So this is exactly what I want you to do in the next session, right? I want you to package the application as a container, deploy that container inside the cluster, and see the same application deployed inside the cluster. You're gonna create a local cluster with minikube, so minikube start is the command, and then you're gonna deploy the container you just created inside the cluster. You will also need MongoDB for that, and make sure that the application works. Okay? So that's the job. As usual, you'll find all of the steps in the documentation, okay? So you can just follow along. But I'm planning to show you what it should look like in a second. As usual, the section we're looking at at the moment is called Deploy into Kubernetes, okay? So let me show you what it should look like at the end. You're gonna start the cluster with minikube start. Mine is already started, okay? So it's gonna be, hopefully, hopefully it's already started. Maybe it's not. So you're gonna see the cluster coming up.
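The commands for starting and checking the local cluster are these:

```shell
# Create / start the local cluster
minikube start

# Is the cluster running?
minikube status

# Double-check by talking to the cluster itself:
# it prints the client version and, if connected, the server version
kubectl version
```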
I thought it was already started. Okay, done. That's the end. So at that point, the cluster is running. How do I know that? I usually type minikube status, and it tells me it's running. Job done. How do I check if it is really running? If I don't believe minikube status, I type kubectl version. kubectl version tells me a lot of things, but one of the things it says is the client version and then the server version. The client version you always get, and the server version comes from the server I'm currently connected to. So it is actually working for me. The next step is to go into the right folder and define what I want to deploy. I do that by writing this YAML file, this configuration. The configuration can be in multiple files or, like in this case, a single file with these three dashes to delimit the resources. So the first resource I've got is the service. We talked about the service as just a load balancer. And the other one is the deployment. In the deployment we have the template, which is basically what this resource should look like when we create instances, okay? And then we've got the same sort of environment variable that we had when we ran the containers, and a container port. So I think some of this might be familiar; some of it will be quite complicated, like this part, for example. But for the time being, we don't need to understand all of it to deploy to Kubernetes. So that's what we'll do. When you have this description locally, the next step is to submit it to the cluster. We use the apply command, and I'm gonna deploy MongoDB and then deploy the application, okay? And that is gonna create the resources. How do I know that? I can do a kubectl get pods and retrieve the applications that were deployed. And I can also retrieve the services, these sort of load balancers. And I can see that both of them are working.
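The submit-and-check sequence from the demo looks roughly like this (the YAML file names are assumptions; yours may be a single file containing both resources):

```shell
# Submit the descriptions to the cluster
kubectl apply -f mongo.yaml
kubectl apply -f knote.yaml

# List the pods that were created
kubectl get pods

# List the services, the load balancers in front of the pods
kubectl get services
```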
So how do I access that? I can do minikube service and then the name of the service, and I see the application up and running inside Kubernetes. And I can test it. Oh, okay, I'm not going to test the images. There we go, all up and running. So I want you to do the same: start the cluster, write the definition, deploy, show me that it works. We'll be running around helping you if you get stuck. Okay, so I know some of you are still deploying to Kubernetes, some of you managed to get to the next part, to get the certificate, to pretty much do all of it. If you haven't, don't worry. It takes time and practice, okay? And if you are on Windows, it takes even more time and more practice, okay? So before we close this session, I'm going to point out the next two sessions that you can do at home to complete the material. There are basically three things you can do. You can scale your deployments, right? Have more instances of the app so you can handle more traffic. And then you can deploy the application inside AWS. The same config file, the same YAML that you wrote today to deploy locally, works inside AWS, works inside Azure, and works inside Google Cloud, okay? And that's one of the reasons why we like Kubernetes: we can use the same YAML files everywhere. So I wanted to show you what it looks like, so when you go back and do the material, you have an idea of what it should look like. The first thing I want to discuss is scaling. This is the kind of scenario we have now: a single pod with a single deployment, and the traffic flows through it. But you can use the deployment to create more instances of your app. You might say: I want two. And then what the Service is going to do, this load balancer, is just distribute the traffic from one instance to the other, right?
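Declaratively, the "I want two" step is just a change to the Deployment's replica count. A sketch, reusing the assumed Deployment name app from the earlier illustration (only the relevant excerpt is shown):

```yaml
# Excerpt of the Deployment: only replicas changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2   # was 1; the Service now load-balances across two pods
```

Re-applying the edited file with kubectl apply -f, or running the imperative equivalent kubectl scale deployment app --replicas=2, has the same effect.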
It's gonna distribute the traffic evenly between the instances. However, is there anything wrong with this design? Is there anything that doesn't work when I use this application? Assuming that we have MongoDB running, okay? Is there anything wrong? IP address? Close. What about the images? Where are the images stored? I mean the images are the cat picture and any other pictures you've used to test it. Yeah? So I can see one opinion here; anyone else in the middle or towards the back? What do you think is going to happen? Stateful, stateless. Think about what is inside the pod. See, if we upload the cat picture, that picture will be stored on just one of the two instances. And that's okay, but when the traffic is routed to the other one, when you visit the page and you don't hit the same instance, you won't be able to see that image. The application is broken. We don't want that to happen. So how do we fix this? Put it somewhere else. Put it somewhere else, make it someone else's problem. I like that. That's actually what we do. We call this kind of problem state; we call these stateful problems, or just stateful applications. And generally, when we deploy something in Kubernetes, we want it to be stateless, right? Why? Because we want to create N copies and not worry about pictures of cats or anything else. So what do we do? Well, we have our application in Node.js or Java, we have our MongoDB, and generally what we do is upload these images into something like a blob store, somewhere where we can put files. If you're familiar with AWS, that's usually S3. If you're on Azure, there is Blob Storage. GCP, I'm not sure what the name is, but everyone has got their own. Now, there are also tools that mimic AWS S3, just a place where you can put these files. A popular one is called Minio, and Minio is something that we can deploy inside the cluster.
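Running Minio inside the cluster could look roughly like the sketch below. The server /data arguments and port 9000 are Minio's usual defaults, but treat the whole manifest, including the placeholder credentials, as an illustration rather than the workshop's actual file:

```yaml
# Illustrative single-instance Minio deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args: ["server", "/data"]   # serve objects from /data
          env:
            - name: MINIO_ROOT_USER
              value: changeme          # placeholder credentials, do not use for real
            - name: MINIO_ROOT_PASSWORD
              value: changeme123
          ports:
            - containerPort: 9000
```

A matching Service in front of it (like the one for the app) would give every instance of your application the same address to upload to.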
And it's basically going to give us an API that we can call to upload files. Okay? And that's okay, because if the files are not on the file system anymore and they are in a central location, then we can have as many instances of our application as we want, and they're all going to retrieve these images from this sort of separate database for files. Okay? So eventually where we want to end up is: every time we store a note, that note goes inside MongoDB; every time we upload an image, the image is stored in a similar database, but one designed to store files, which in this case is Minio. But it could be any other service that does that. Okay? So that's basically what you're going to do in the next session. Some of you might have started this already, but that's the idea. You can try to refactor the app to make it stateless, then try to scale it and see how it goes. So that's the next part. And there is another one which you might be tempted to do. Unfortunately, you will need a little bit of AWS credit for the last part, which is basically: you deploy into the cloud, the same YAML file works locally and inside the cloud, and you have a fully functioning web application. Okay. So, what we've done today. This is the end, by the way. We're a little bit over time, but we also started late. So what we've done today: we built the application. If you used Node.js, it was easy. If you had Java, unfortunately, it was almost impossible because we found so many issues; apologies for that. You packaged it as a Docker container, so we understood how we create archives and how we can pass them to our colleagues. And the third thing that we did was to deploy these containers into Kubernetes. And then some of you managed to scale them, some of you managed to see how everything works, and some of you managed to do all of the remaining parts. Good for you.
And I've heard that someone got certificates as well. Very well done. At the end, yeah? I think you should try as well to deploy this application and get your own certificate; I think that's quite rewarding. So I'm going to say to all of you: well done for getting this far. And I think this slide is a little bit broken, I'm not sure why. Yeah. So I just want to say thank you to everybody, everyone who helped going around and fixing issues, and a special thanks to Madhvi as well. In the first session she managed to do all of it, and today she's here helping you out. So I hope that, you know, Kubernetes is extremely hard, okay? But you can do it, okay? And then next time you're going to be the one standing here and explaining how these things work. I really mean it. I think you should try, and we're going to run this session again, maybe, and then I want to see you up here with me next time, helping others, okay? So what's next? Practice makes perfect. If you haven't finished the material, I think you should, right? It's very useful. And if you have any questions, just ping me. My name is Daniel. You can find me pretty much everywhere: Twitter, LinkedIn, whatever. I'm not trying to hide. And the material is for you to keep. Everything you see, all the PDFs you have today, is going to stay free forever, okay? You can use it, and we keep updating it. The other thing we're doing is adding more languages as well. So maybe you don't know Java; maybe you know Python, maybe you do .NET. We're going to add those in as well. So this is not going to change. If you're interested in the rest of the content, we offer a discount so you can get it today, a little bit off the full price, but you don't have to; the other material is going to stay free forever. So we are this company, we basically do training. You can see we recently ran one in London.
We ran one recently in Singapore, which was quite fun. In general, we just travel around and do quite a lot of training in all sorts of interesting places. That's what we like to do. So thank you very much for coming, and I hope I'm going to see you again on Thursday. We are running the Kubernetes meetup: there are going to be two interesting talks, but I've got a surprise, and I'm actually running a quiz night on Thursday. We're going to ask questions about Kubernetes, and there might be prizes. I want to make sure that everyone enjoys it, so I'm not going to ask hard questions. I mean, maybe one or two. But if you enjoy that kind of stuff, I hope I see you on Thursday. And that's pretty much it. If you liked this sort of thing, please tell me how bad or good it was so we can make it better for the next class that we run here in Singapore. So if you don't mind filling in just a five-second feedback form, please do. And thank you very much for having me. All right, so that's...