How to deploy your Spring Boot application on Amazon's managed Kubernetes service, by Divine Odazie. If you have any questions while this section is going on, you can type them right in the chat and we'll get to them once the tutorial is done. Hello everyone. My name is Divine Odazie. I'm a developer advocate and technical writer, and also an AWS Community Builder for containers. Today you'll learn how to deploy your Spring Boot application on Amazon's managed Kubernetes service, also known as EKS. The steps we'll follow in this particular session: first, we create the Spring Boot application, then we dockerize it. After dockerizing the Spring Boot application, in order to understand how Kubernetes works before we move on to EKS, we'll deploy the dockerized image locally on Minikube, which is a one-node Kubernetes cluster. Then, in order to use the application on EKS, we'll push the dockerized image to Amazon Elastic Container Registry. Next, we'll create a Kubernetes cluster on EKS, and after that we'll deploy the application from Elastic Container Registry to that cluster. Finally, we'll enable access to the application via an external IP address that EKS will provide. To properly follow along, you need to have Docker installed on your machine, and an IDE — for this, I'll be using the IntelliJ IDE. You need to have some understanding of Kubernetes, as we're not going into every aspect in this session. You need to have Minikube installed on your machine — I'll cover how you can install Minikube in this slide. You need an AWS account, and you should also create a user with admin access in AWS IAM. You also need the AWS CLI installed on your machine, and kubectl, in order to communicate with the Kubernetes cluster you create locally and also on Amazon EKS.
So before we start, let's go over some of the major tools we'll use in this session. First, Spring Boot. Spring Boot is a Java-based framework for building Spring-powered, production-grade applications and services with minimal fuss. It can do that because Spring Boot abstracts away much of the configuration you would normally do with Spring. Its opinionated approach makes you do things in a specific way, and it gives you production-ready features like metrics, health checks, and externalized configuration; it has an embedded Tomcat server and removes the need for the XML configuration you'd normally write in Spring. With Spring Boot's flexible packaging options — packaging into JAR files, WAR files, or packaging the application into a container image — you can deploy your application basically anywhere: on cloud platforms, virtual machines, bare-metal machines. In this session, as the title says, we'll package it into a container image and deploy to AWS ECR, then to AWS EKS, Amazon's managed Kubernetes service. Next, Minikube. In a production cluster, there will be multiple master and worker nodes across several virtual machines, which requires a lot of system resources — CPU, storage — and that's not practical when working with Kubernetes on your local machine. Minikube abstracts all of that and gives you all the master and node processes in one single-node cluster. That's why I said earlier it's a one-node Kubernetes cluster on your machine — macOS, Linux, or Windows. It focuses on helping application developers try things out locally before going to production, and on helping new users try out Kubernetes in their journey. You can get Minikube from the docs if you don't have it on your machine already.
Basically, you can just Google how to install Minikube and you'll find the link I've put in this slide. Next, Amazon Elastic Container Registry. Just like Docker Hub or GitHub Container Registry, Amazon ECR is a fully managed container registry that offers high-performance hosting. It offers all of those features, plus features that Docker Hub, GitHub Container Registry, and other container registries don't — like immutable image tags (we'll see how to make an image tag immutable along the line in this session) and image scanning, which checks images for security vulnerabilities — and some other things we'll see at a high level. And, obviously, there's its integration with other AWS products and tools. The main work in this session will be on Amazon's managed Kubernetes service, EKS, which abstracts all the effort required to create a Kubernetes cluster — otherwise you'd need to install and maintain your own Kubernetes control plane — and gives you all the advantages of AWS scalability, reliability, the AWS infrastructure, and its integrations, as I talked about earlier. When working with EKS, we first provision an EKS cluster; EKS automatically deploys the Kubernetes masters, and then we deploy the worker nodes. In order to communicate with the Kubernetes cluster on EKS, we'll use kubectl — which we'll also use to communicate with the one-node cluster created by Minikube — and then we'll deploy our application on EKS. Now it's demo time. All the commands, code, YAML configuration, and some AWS-specific steps I'll talk about along the way, you can find in a public repository on GitHub, so you can cross-check in case you're having issues while working through this session. So let's get started with the demo. First, we create a Spring Boot application.
So ideally, you would create a Spring Boot application using the Spring Initializr at start.spring.io, but for this session, this tutorial, we're going to create it with the IntelliJ IDE. Click on New Project, then name the project — KCD Africa Spring Boot. The application is going to be based on Java 17. Next, when creating the application, we select the dependency to use, which is Spring Web, because we're going to create a RESTful API. We wait a bit, and once everything is checked and our application has initialized, we then create a new Java class. We're going to call it TestController — this will be a REST controller to test our application, so we annotate it with the @RestController annotation. We're going to create just two endpoints: one will be the index endpoint, and the other will be a name endpoint. Using the @GetMapping annotation, the first one returns "Hello World", which is what most programming demos use, and for the second GET endpoint, the name endpoint, we just return my name. To build and run this application, we can click the run button in the IntelliJ IDE, or on the terminal use the Maven wrapper with the install command. Then to run the application, we use java -jar on the JAR file it built — the KCD Spring Boot snapshot JAR — and our Spring Boot application starts. On port 8080 we can test it and check that it works. We get "Hello World" and the name, so our Spring Boot application works. So next, we'll dockerize this Spring Boot application. In the terminal on my machine, to dockerize the application, we first of all create the Dockerfile.
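The controller described above might look something like this minimal sketch (the class and endpoint shapes follow what's said in the session; the exact paths and return strings are assumptions):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal REST controller with the two endpoints described in the session.
@RestController
public class TestController {

    // Index endpoint: the classic demo response.
    @GetMapping("/")
    public String index() {
        return "Hello World";
    }

    // Name endpoint: returns the speaker's name.
    @GetMapping("/name")
    public String name() {
        return "My name is Divine Odazie";
    }
}
```

This assumes a standard Spring Boot project generated with the Spring Web dependency, so the `@RestController` and `@GetMapping` annotations are on the classpath.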
So there are other ways you can dockerize a Spring Boot application — for example, you can use the Maven wrapper with the build-image plugin to build an image — but here we'll create a Dockerfile. We can create the file with touch and use the vi editor, or write the Dockerfile in IntelliJ. As I said earlier, the application uses OpenJDK 17, so that's the base image the Spring Boot application image will be built FROM. Then we COPY the JAR file from the target directory into the container — the path, then the file name. Then we write the ENTRYPOINT — basically, this is the command that starts our application when the container is run. It's the same java -jar command we used earlier; I'm writing it in the Dockerfile to run the application, and we EXPOSE port 8080. When that is done, we can build the image. Before that, let's just check that the Dockerfile is there with ls. You can also view the Dockerfile with the vi editor — I wrote it with vi, and we'll be using the vi editor when creating the Kubernetes deployments and basically from here on. Then build the image: docker build, with -t to tag the image kcd-springboot-image. In time the image is built, and we can check with docker images — you can see the Spring Boot image. Then to test the image, docker run, where we tell Docker which ports we want the image to run on, mapping from the container's port to the host. We can test that, and you can see that our dockerized application is working — the Docker image works. Then we can stop that process. Next, we'll deploy this particular image onto Minikube. Make sure Minikube is up and running.
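A Dockerfile along the lines described above might look like this sketch (the JAR file name is an assumption based on the project name used in the session — adjust it to whatever Maven produces in your target directory):

```dockerfile
# Base image: OpenJDK 17, matching the Java version the app targets.
FROM openjdk:17
# Copy the built JAR from Maven's target directory into the container.
COPY target/kcd-springboot-0.0.1-SNAPSHOT.jar app.jar
# Document the port the embedded Tomcat server listens on.
EXPOSE 8080
# Start the application when the container runs.
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

You would then build and run it with something like `docker build -t kcd-springboot-image .` followed by `docker run -p 8080:8080 kcd-springboot-image`.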
So basically, minikube start. I'm using Minikube on a Mac with an M1 Apple Silicon chip, so you may not get exactly the same output on Linux or Windows. For us to deploy that particular image, ideally we'd have to push the image to a container registry — maybe Docker Hub; later in this demo we'll be using Amazon ECR. But for this step, before we go on to deploy to ECR, and so that we don't have to push to Docker Hub, we're going to create a local registry — a local repository where we will push our image. We can do that easily with Docker's registry image. Basically, the command goes like: docker run in detached mode, with restart always, a name for the container, and the registry image. Since I currently don't have the registry image locally, Docker is going to find the image and pull it, which takes a few seconds. Once I have the image, I can do docker ps to check that it's currently running — you can see the registry container on port 5000. Next, we have to point Minikube and the local registry at each other, so that when we deploy our application, the cluster will be able to find the image. We're going to use this command for that. After pointing your shell at Minikube, you need to check that it worked. You'll notice that now all the images shown in this particular terminal session are the images inside Minikube, and no longer the images in your normal Docker environment. So if you check here, the Spring Boot application image is missing — the list is not the same as before. Because of this, you need to rebuild the image: docker build with -t kcd-springboot-image and the build context. I've already done that, as you can see above. So next, you need to tag this image that you built.
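The local-registry setup described above can be sketched with commands like these (the container name is an assumption; the `minikube docker-env` step is one common way to point your shell's Docker CLI at Minikube's Docker daemon, which matches the behaviour described — images listed afterwards are Minikube's, not your host's):

```shell
# Run a local registry container on port 5000, restarting it automatically.
docker run -d -p 5000:5000 --restart=always --name local-registry registry:2

# Confirm the registry container is running.
docker ps

# Point this shell's Docker CLI at Minikube's Docker daemon.
eval $(minikube docker-env)

# Images listed now are the ones inside Minikube, so rebuild the app image there.
docker build -t kcd-springboot-image .
```

These commands require a running Docker daemon and Minikube, so treat them as a sketch of the flow rather than a copy-paste script.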
So, to be able to push the image to the local registry, you need to tag it correctly. To explain this better: when you push images to Docker Hub, you tag the image — say this Spring Boot image — with your username, that is, the username of your Docker Hub account, then the repository you want to push the image to, and it gets pushed with the latest tag unless you give it a different tag. Now, for this local registry, instead of a username we tag the image with localhost:5000 — if you remember, the registry is listening on port 5000 — then the repository name, and we give it the latest tag. For the repository name, we can use the same name. We'll see something similar when we use Amazon ECR: there, we tag the image with the repository URI, so when we get to that, you'll better understand what I'm saying. So we tag the image, confirm with docker images that it has been tagged, and then we can push this particular image to the repository with the latest tag: docker push. When I ran docker push with the image tag, I noticed that it refused to connect, and this was because, as I explained, we pointed the session at Minikube, which effectively removed the registry container from view — this didn't happen before. So I had to recreate it, and it's currently running. Let's try the push again. OK, it's pushing now. Now the Spring Boot application image is in the local registry.
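The tag-and-push step might look like this (the image and repository names follow the ones used in the session; adjust them to your own):

```shell
# Tag the locally built image for the registry on localhost:5000.
docker tag kcd-springboot-image localhost:5000/kcd-springboot-image:latest

# Verify the new tag exists.
docker images

# Push the tagged image to the local registry.
docker push localhost:5000/kcd-springboot-image:latest
```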
Next, we need to create a Deployment — the Kubernetes Deployment component. To do that, we'll generate the Deployment from the command line without having to hand-write a YAML file, using the --dry-run=client flag, which produces the entire YAML manifest as a template. So let's see how to do that: kubectl create deployment, and we give it a name — kcd-africa-demo — then the image, which is the localhost:5000 image, then the dry-run flag with the output format, and redirect it into a file, deployment.yaml. So now we've created a Deployment manifest in the file. We also want to put the Service in the same file. To do that, we echo a "---" separator to divide the file into two documents, then create the Service the same way we created the Deployment: kubectl create service — I'll copy the command and change the name to kcd-africa-demo. Then we can check the deployment file: the Service was created with target port 8080, and the Deployment was created with the image from our local registry. So now we can apply this manifest: kubectl apply -f, then check that the Deployment was created successfully with kubectl get deployments — yes, it's currently running. Check the application with kubectl get pods — the pod is currently running. Now, we can't yet test and connect to the Spring Boot application we've deployed. Although we exposed port 8080 with the Service, we cannot access it externally — if you go to the browser on port 8080, you won't be able to reach the application. In order to access the application, we have to create a tunnel, using kubectl port-forward.
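The dry-run generation described above can be sketched like this (names and the `---` separator step follow the session; the exact service subcommand and port flags are assumptions consistent with the target port mentioned):

```shell
# Generate a Deployment manifest without creating anything on the cluster.
kubectl create deployment kcd-africa-demo \
  --image=localhost:5000/kcd-springboot-image:latest \
  --dry-run=client -o yaml > deployment.yaml

# Separate the next YAML document in the same file.
echo "---" >> deployment.yaml

# Generate a ClusterIP Service manifest for port 8080 and append it.
kubectl create service clusterip kcd-africa-demo \
  --tcp=8080:8080 --dry-run=client -o yaml >> deployment.yaml

# Apply both objects and check them.
kubectl apply -f deployment.yaml
kubectl get deployments,pods
```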
So to do that: kubectl port-forward service/, then the name of our service — kcd-africa-demo — then the port the service is currently listening on, and the port we want to listen on on our local machine. Then if you open the browser, you can see the application has been successfully deployed on Kubernetes via Minikube. Next, we'll push the image we created to Amazon Elastic Container Registry, which we'll then use to deploy the application on Amazon EKS. Before you can push your container image to Amazon ECR, you need to make sure you have the AWS CLI installed. If you run the aws command in your terminal and don't get any response indicating it exists, then you have to install the AWS CLI. We're not going to install it in this session because I already have it installed — you can just Google it and you'll find the installation instructions for your particular machine. After that — I don't know if you recall, I mentioned creating an IAM user for this particular session with admin access, instead of using the root user of your AWS account — you should have done that too. Next, you need to authenticate and configure your CLI. To configure the CLI against that particular IAM user account, you run aws configure, and you'll be prompted to fill in the access key ID and secret access key for that account. I've already done that. You should test that your configuration actually works; for me, I'll just run aws configure with the list subcommand, and I can see that I've already configured the access key, the secret key, and the region — us-east-1. After that, in order to push to Amazon ECR, I need to first of all create a repository — just like the registry you created.
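The port-forward and CLI-configuration steps could look like this (the service name follows the session; aws configure will prompt interactively for your own credentials):

```shell
# Forward local port 8080 to the service's port 8080 inside the cluster.
kubectl port-forward service/kcd-africa-demo 8080:8080

# In another terminal: configure the AWS CLI with the IAM user's credentials,
# then confirm what is configured.
aws configure
aws configure list
```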
When you were deploying to Minikube, you created a local registry; just like with Docker Hub, where you create a repository under your account, I need to create a repository on ECR. So we create the repository with this command — I've copied and pasted it, so let me expand it. This is the AWS CLI: aws ecr create-repository, and the flag is --repository-name. Let me edit the repository name: kcd-africa-demo-repo. Then, remember when I talked about the security features AWS offers that some other container registries don't, like image tag immutability? Here we're telling ECR to make the image tag immutable, so that when we push an image to the repository, that particular version of the image — whether its tag is latest or whatever we name it — will be unchangeable. That's what image tag immutability is; we'll get to see that. There's also the image scanning configuration, scan on push: this tells ECR — like I mentioned among the security features of ECR — to scan the image for any security vulnerabilities. Press Enter to create the repository. Once you run the command, you should receive a JSON object as a response, with the repository ID, name, the repository URI, and when it was created. In order to check that this repository was actually created, you can go to your AWS account and log in to the ECR console. After a refresh, I can see the kcd-africa-demo repository has been created, with its repository URI, and if you check inside the repository, you'll see there's no image yet. So next, I'm going to push our image to this particular repository. Before you can push the image to this repository, you need to log in to the repository.
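The create-repository command described above, sketched out (the repository name follows the session; the region is an assumption matching the one mentioned earlier):

```shell
# Create an ECR repository with immutable tags and scan-on-push enabled.
aws ecr create-repository \
  --repository-name kcd-africa-demo-repo \
  --image-tag-mutability IMMUTABLE \
  --image-scanning-configuration scanOnPush=true \
  --region us-east-1
```

The `--image-tag-mutability` and `--image-scanning-configuration` flags correspond to the immutability and scan-on-push features discussed in the session.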
So we do that by — OK, first I'll copy the repository URI — then we get the authorization token from the ECR API and pass it to Docker to log in: aws ecr get-login-password gets the login password for the particular region where your repository is, and we pipe it to docker login with the username AWS and the --password-stdin flag. That flag is what passes in the value of the password generated by the ECR API. Then we add the repository URI that we copied. We get "Login Succeeded" — I've successfully logged into the kcd-africa-demo repository. Next, we need to tag this particular image, the image that we built. If you remember, in docker images the image we built is the kcd-springboot-image; we need to tag it with the repository URI and a tag. To do that, just like we tagged before pushing to the local registry: docker tag, the source image, then the target, which is the repository URI. This image then gets pushed with a tag of latest, because we didn't specify one here — so it gets tagged latest; we'll see that. After tagging, check docker images and you'll see the repository URI for kcd-africa-demo-repo with the latest tag. So now, after tagging the image, we need to push the Docker image to ECR. We've already logged in to the ECR repository, so to do that it's the normal docker push with the repository name, pushing with the default latest tag. Preparing the layers takes some time, depending on how fast your internet is — mine is looking pretty slow. That really took some time; it's a good thing I'm recording this session.
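Putting the login, tag, and push steps together (the account ID and region in the repository URI are placeholders — use the URI copied from your own ECR console):

```shell
# Authenticate Docker against ECR using a token from the ECR API.
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin \
  123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag the local image with the ECR repository URI (defaults to :latest).
docker tag kcd-springboot-image \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/kcd-africa-demo-repo:latest

# Push the tagged image to ECR.
docker push \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/kcd-africa-demo-repo:latest
```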
Now, so my image has been pushed to the kcd-africa-demo repository — if you check, you can now see the image and all its details. Next, we'll work with EKS to deploy the application. To work with Amazon EKS — that is, to deploy the image — we'll manage the cluster using eksctl, the CLI tool for Amazon EKS. You need to have eksctl installed, so let me check with the command eksctl version, and I can see the version of eksctl I have. In order to install it, you can just Google it and you'll be taken to the eksctl documentation. So now that we have the eksctl CLI, we'll need to create the cluster and then create the Deployment and the Service on the cluster. Before that — once you have eksctl on your machine, you can get on to creating a Kubernetes cluster and deploying the image onto it — let's just review how Kubernetes clusters on EKS work. An Amazon EKS cluster consists of two primary components: the control plane and the worker nodes. The worker machines that are part of the Kubernetes cluster are called worker nodes. As I said earlier — remember the slides — you deploy the worker nodes yourself, while EKS provisions the master nodes for you upon creation of the cluster. Next, before we can do that, there are some considerations that are somewhat peculiar to Amazon EKS — things you need to consider when using it: VPC (virtual private cloud) configuration, and security group configuration. A lot of the time, when you configure an EKS cluster, you need to specify the VPC subnets required for the cluster to be hosted in.
And now, you can create a private subnet or a public subnet; in this particular session we use private subnets. To maintain high availability, we are required to put the subnets in at least two availability zones — we'll see how to do that. Also, for security considerations: when the worker nodes are deployed, EKS automatically configures communication between the worker nodes and the control plane, and it is constructed so that communication only happens over privileged ports. So in order to access our application via the external IP that will be generated when we deploy the application — and for other access, say you want to implement load balancing, DNS, and some other features — you will need to add an additional inbound or outbound rule, depending on what you want to do. For this particular session, we're just going to go to the worker nodes' security group and create an inbound rule, so that the particular port we're listening on will be reachable from the web. So here's how it goes. To create a cluster, we'll create a YAML file — like I said earlier, we'll use the vi editor from here on. So first, touch to create the YAML file: eks-cluster.yaml, then edit it with vi. I've already written the cluster configuration — the cluster, its subnets, and the worker node groups — so let me copy that and paste it here; don't worry, I'll walk through it. If you look from the beginning, you see the apiVersion; the kind, which is ClusterConfig; the name in the metadata; then the VPC — the virtual private cloud subnets, which I set as private subnets. These subnet IDs are specific to my account — the IAM user I created and have been using throughout this session.
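An eksctl ClusterConfig along the lines described might look like this sketch (the cluster name, region, subnet IDs, and node-group name are placeholders — use your own account's subnet IDs across two availability zones):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: kcd-africa-cluster
  region: us-east-1
vpc:
  subnets:
    private:
      us-east-1a: { id: subnet-0aaaaaaaaaaaaaaaa }
      us-east-1b: { id: subnet-0bbbbbbbbbbbbbbbb }
managedNodeGroups:
  - name: ng-1-workers
    instanceType: t3.small      # low-cost burstable instance, fine for a demo
    desiredCapacity: 2
    privateNetworking: true     # place nodes in the private subnets above
```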
And these node groups here — Amazon EKS has a feature called managed node groups, which automates the provisioning and lifecycle management of nodes (Amazon EC2 instances) for each Amazon EKS cluster, and keeps the nodes updated to the cluster's version of Kubernetes. So basically, as I said when I talked about EKS handling the things you'd normally do yourself with a self-managed Kubernetes cluster, these managed node groups manage the nodes for you. Looking here, we have the node groups, and they are of the t3 EC2 instance type. Before that, let's see how to get the subnets on your AWS account: on your AWS account, you can just search for VPC, or you can search for subnets directly, and you'll be able to see the subnet IDs of your particular AWS account. You'll also see each subnet's availability zone — I'm in the us-east-1 region, zones us-east-1a and us-east-1b, just like you can see in the configuration. So that's that for the subnets. Back to the node groups: the node group is of instance type t3.small, and this is basically because — I'm going by the documentation here — this particular instance type is an affordable option for code repositories and test environments: a low-cost instance with burstable CPU performance. It's not something you'd use when you want to run an application that will see, say, hundreds of thousands of real-time users at a particular time; it's just a small instance for this specific demo. So yeah, that's that. And there's another node group set up for building the containers — pulling and building images.
So we can then save this file. Then, to create the cluster, we use eksctl — the EKS command-line tool: eksctl create cluster with the -f flag and the cluster YAML file. It's going to create the cluster from the configuration, in the particular region. This will take some time to create the cluster and deploy the worker nodes, so I'll see you when it's done. That really took a while. So we can see that the KCD Africa cluster has been created in the region and is ready. And also on the AWS console, under EKS, we can see that the cluster is ready. If you click on it — OK, the resources are here: the Kubernetes resources, and the nodes have been deployed; you can see them here, the nodes with the t3.small instance type. Also, if you remember the step about connecting kubectl to EKS — we'll skip that step, because eksctl has already done it for us. If you look through the output here, you see "saved kubeconfig as" — it has saved the cluster in my kubeconfig file, and all the resources have been created. So if I now use kubectl — say, run kubectl get service — I see the cluster IP: this is the default service in the Kubernetes cluster on EKS. Next, we can also run kubectl get pods with a wider view, across all namespaces. We can see all the pods — kube-system and the rest — all connected, so our cluster is fine. Then next, we need to create the Deployment, as I indicated, to deploy our image.
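The cluster-creation and verification commands could be sketched as (file and cluster names follow the session):

```shell
# Create the EKS cluster (control plane plus managed node groups) from the config file.
# This also writes the new cluster's credentials into your kubeconfig.
eksctl create cluster -f eks-cluster.yaml

# kubectl now talks to the new cluster: check the default service, pods, and nodes.
kubectl get service
kubectl get pods --all-namespaces
kubectl get nodes
```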
So as we did for the cluster, I'll create a new YAML file, eks-deployment.yaml, with touch, then open the file in the vi editor. I've already written down the YAML configuration, so I'll just copy it in and explain. It's of kind Deployment, with the apiVersion; the name is kcd-africa-demo-deployment; and it creates two replicas — so when we apply this Deployment, it creates two pods. Then there's the name of the container, and you can see here I have a placeholder for the image URI — we'll put in the URI of the specific image we pushed to ECR. If you've pushed new versions of the image, you can change the image reference here. So let's get the ECR image URI: on the ECR console you can see the image, click "Copy image URI", and the URI is copied. (I'm using the vi editor here mostly because it's what I'm used to, but you can use nano, or create the file in a text editor — it depends on what you like.) So this URI is our demo repository on ECR — the demo repository we created — and this is the image tag. It picks the latest image, this particular image, and the Deployment will be built with it. Then I paste it in and save the file.
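The Deployment manifest described might look like this sketch (the names follow the session; the account ID and region in the image URI are placeholders for the URI copied from the ECR console, and the selector label is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kcd-africa-demo-deployment
spec:
  replicas: 2                      # two pods, as described in the session
  selector:
    matchLabels:
      app: kcd-africa-demo
  template:
    metadata:
      labels:
        app: kcd-africa-demo
    spec:
      containers:
        - name: kcd-africa-demo
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/kcd-africa-demo-repo:latest
          ports:
            - containerPort: 8080  # the port the Spring Boot app listens on
```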
OK, then we can make sure we've saved eks-deployment.yaml — you can see it. Also, I forgot to mention the port the container is listening on: remember, when we created the container we exposed port 8080, so we set containerPort 8080, the port the app listens on when the image is running in the container. So then let's apply this Deployment: kubectl apply -f, the deployment file. The Deployment has been created successfully, and we can check with kubectl get deployments — yes, it has been created, and as I mentioned earlier, the two pods have been created and are currently running. Then next, we need to expose our Deployment outside the cluster with a NodePort service. To create the service, we'll again create a file with touch: eks-service.yaml, then edit it with vi. Like all the files we've created, I already have the configuration: it's of kind Service, with the name; the type, whose value is NodePort, which exposes the service on each node's IP at a static port; and we define the nodePort for external traffic — the static port we're opening up for external traffic — which is 31479. When we get to accessing the application via an external IP address, as I said in the steps, we'll see how we go to the security group in the configuration and create an inbound rule in order to allow traffic on it. Then there is the port of the service, for other members of the cluster, and the targetPort, where the container is actually running. Then we save this.
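The NodePort Service described might look like this sketch (names follow the session; the selector label is an assumption that must match the Deployment's pod template labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kcd-africa-demo-service
spec:
  type: NodePort               # expose the service on each node's IP at a static port
  selector:
    app: kcd-africa-demo       # must match the Deployment's pod labels
  ports:
    - port: 8080               # port other cluster members use
      targetPort: 8080         # port the container actually listens on
      nodePort: 31479          # static port opened for external traffic
```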
Then we apply it: kubectl apply -f service.yaml. Yeah, the service has been created. Let's confirm that everything works fine — let's get the pods with kubectl get pods. All right, the pods are running. Then kubectl get nodes — currently all the nodes are ready. Now we can access our application via one of the nodes' external IP addresses. To do that, we need to create an inbound rule in the security group. So let's search for security groups — oh, we already have it here; I've already created it. You need to look for the security group that enables communication between the control plane and the worker nodes in the node group, ng-1. When you find that particular security group, you go and create an inbound rule. I've already created an inbound rule that allows traffic on the node port, 31479, for external traffic, so I'll just show where you would create the inbound rule — yes, there it is; no new rule is needed since we already created it. So now you can actually access your application from anywhere, from any browser, with the node's external IP address. We've got the IP address, so you can access the application with that IP address and the port 31479. Oh, it got added as a Google search — sorry, I mistyped the port — 31479. Yeah, there: hello. Then to test the /name endpoint: my name is Divine Odazie.
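The security-group step can also be done from the AWS CLI rather than the console. This is a sketch: the group ID and node IP are placeholders, only the node port 31479 comes from the session, and opening traffic to 0.0.0.0/0 is for demo purposes only.

```console
$ # Allow external TCP traffic to the NodePort on the worker nodes
$ # (sg-0123456789abcdef0 is a placeholder for the node security group ID)
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 31479 --cidr 0.0.0.0/0

$ # Then hit the application on any node's external IP
$ curl http://<node-external-ip>:31479/
```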
So you followed along and worked through creating a Spring Boot application, dockerizing it, deploying the application locally on a one-node cluster with Minikube, pushing the Dockerized image of the application to Amazon Elastic Container Registry, creating the Kubernetes cluster on EKS, and deploying the application on the Kubernetes cluster. And finally, we were able to access the application with an external IP address. So, that's that. Thank you for listening to this session and for following along. You can follow me on Twitter — this is my handle, Divine Odazie — and Divine Odazie is also my name on LinkedIn. I hope you enjoyed this session. All of this was recorded, I don't know if you noticed, but I'll be live in case you have any questions, to answer all your questions regarding this particular session. Thank you, and I hope you now know how to deploy your Spring Boot application in Amazon's managed Kubernetes service. Yeah, do enjoy the rest of KCD Africa 2022.

That was a great demo from Divine, and if you were looking through the chat, Divine was right there to answer most of the questions — that was a really informative one. Okay, let's wait for Divine to join the backstage — I think he was in the backstage before he dropped off, so he can answer some more questions live. We still have some time within Divine's slot, so let's give him a few minutes while he joins. I think he's here, so let's put him on. Can you guys hear me? Yes, we can hear you clearly. Welcome. Sure. Yeah, no worries — internet happens to everyone. It's like, if it doesn't happen, we have to make a special sacrifice to the gods maintaining the internet. Demo gods, yeah. Okay, yeah, welcome. So, do I need to introduce myself? Yeah, if you can give us more of an introduction, or any more context around the talk you might have remembered after listening to your conversations with people in the chat. Okay, so it's something I figured out, or noticed — there were loads and loads.
So, regarding the M1 chipset: initially, when I built the Docker image, it was on the ARM64 architecture. When I pushed the image to ECR and applied the deployment, I was stuck for like five hours trying to work out why the image was not running. Then I remembered — wow, I'm using the M1 chipset, and my nodes are based on AMD64. So I had to set up a VM with Colima, which provides container runtimes on macOS using Linux virtual machines, and build the image on that VM as AMD64 before pushing it to ECR. The M1 chip has just been making my life a lot harder for DevOps work. I think that's why I originally covered it in the talk, but I had to remove it to fit the time slot here.

Awesome. Okay, I think we have a couple of questions here. Anita, I can't find them — can you? Yeah, isn't that the question Divine answered earlier? Oh, okay, yeah. I think there are some others. Okay, I found them. Yeah, Divine, can you explain more about the local repository you created?

Okay, sorry, I was on mute. So the local repository I created is not something you'd do in a production environment — it's just for testing purposes. For the demo I wanted to focus on ECR rather than pushing to Docker Hub, so to make things faster I used the registry image — the official registry image on Docker Hub — to create a local repository for that particular session. I then pushed my Docker image into that repository in order to use it on Minikube. So yeah. Awesome.
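The local registry Divine describes can be sketched like this — the application image name and tag are placeholders; the registry image itself is the official registry image on Docker Hub:

```console
$ # Run a throwaway local registry on port 5000
$ docker run -d -p 5000:5000 --name registry registry:2

$ # Tag the application image for the local registry and push it
$ docker tag demo-app:latest localhost:5000/demo-app:latest
$ docker push localhost:5000/demo-app:latest
```

A Minikube deployment can then pull localhost:5000/demo-app:latest instead of an image from Docker Hub or ECR.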
Okay, we have a question in the chat here. It says — why did you use, okay, did you not use — okay: why did you not use Kubernetes? I think he asked the question and then corrected it. Why did you not use Kubernetes? Yes — I'm kind of confused, since I did use Kubernetes; we created a Kubernetes cluster. I don't know if I can see his question. Okay, it's in the YouTube chat. Emmanuel — apologies if I didn't pronounce your name correctly — if you can ask again, I'll refresh the questions.

Okay, I think the other question we have is: what if I built this image on an ARM architecture and need to run it on the EKS cluster? So, it depends on how you provision and deploy your nodes — I talked about that a bit earlier when I joined the call. You have to make sure that your nodes run on the same architecture as your image. If not, it will deploy, right, but it won't run — you'll get a CrashLoopBackOff status, which leads to an error: the pods just keep restarting and crashing. So you have to make sure your cluster — your nodes — are on the same architecture as your images.

Oh, okay, awesome. Yeah, I think that's all the questions we have. Audience, if you have any more questions, Divine is here before he leaves. Okay, I think there's another one: why did you use Minikube? Okay, so I used Minikube because I started from scratch, right? We ran the Spring Boot application locally first, and I used Minikube to show and explain things for people who may be using Kubernetes for the first time and who don't have access to AWS or ECR — so they can try it out and test things initially. That's why I used Minikube.

Okay, well, Emmanuel, I hope that answers your question. Emmanuel says he understands you now because of Colima — I think he's talking about the ARM issue, the architecture issue. Great. So if you have any other questions for Divine, you can still drop them in the chat while he's here. Okay, I think we don't have any more questions. Thank you very much, Divine.
And we look forward to seeing more of your contributions in the KCD ecosystem. Thank you — I look forward to contributing more. Okay, awesome. Bye.
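To make the architecture discussion from the Q&A concrete, here is a rough sketch of how you might check node and image architectures and build an AMD64 image from an Apple Silicon machine — the image name and tag are placeholders, not values from the session:

```console
$ # Show each node's CPU architecture via the standard node label
$ kubectl get nodes -L kubernetes.io/arch

$ # Check which architecture a local image was built for
$ docker inspect --format '{{.Architecture}}' demo-app:latest

$ # Build explicitly for linux/amd64 with Docker Buildx
$ docker buildx build --platform linux/amd64 -t demo-app:latest .
```

If the image and node architectures don't match, the pod deploys but crash-loops, exactly as described above.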