So I think we can move on to the next session, Anita. Yes, sure. Okay, so we have our speaker here ready to go. The next speaker is going to be taking up the topic, leveraging AWS Fargate for running cost-friendly containerized applications. And this is going to be led by Victory. Victory works remotely as a software engineer for Outland, holds certifications on AWS and Microsoft technologies, and writes technical articles on cloud services for developer-focused organizations such as Okta, DigitalOcean, and StepZen. Over to you, Victory. I hope I pronounced that right. Yeah, you did, you did.

Mic check, can anyone hear me? Yes, we can. All right, so today I'll be talking about AWS Fargate and how we can leverage it for running cost-friendly containerized applications. Thanks for the opportunity, I'm pretty excited.

All right, so first off, this talk matters because minimal operating expenditure is often the goal for every startup, every small project, and every solo developer; keeping operating expenditure minimal is something they look forward to and work towards. And just to define that first, operating expenditure refers to the money that you pay to use a service while running your application. So on the cloud, if you're using any service, the money you're paying for it is referred to as your operating expenditure. And the reason why we aim for minimal operating expenditure is to avoid huge cloud bills. Everybody knows that the cloud can be expensive; these services can bill you heavily, and that is why you want to keep your operating expenditure minimal, because you do not want to get billed heavily at the end of the month. And so today, within the next few minutes, I'll be speaking on how Fargate helps developers who are interested in running containerized applications keep their operating expenditure very minimal.

Okay, so before I move into what Fargate is, I'd first of all like to take a step back and talk about containers, because Fargate is a service for running containers. So first off, containers are everywhere. We all use containers, either directly in running our applications or indirectly. For those who work with services such as Heroku or Netlify, those are Platform-as-a-Service offerings; underneath, they also use containers for running your applications. And over the years, containers have proven to be very, very beneficial for software development. They're literally everywhere. We also have tools that are used by millions of developers for running containers. And in comparison to virtual machines, which we used for years to run applications, containers are lightweight. They are cloud native because they fit well into the cloud architecture, and with that, they are very scalable. They are used for running microservices and for running very large applications. And they are cheaper to run because, unlike virtual machines, we can have several containers running within a single virtual machine. They are also platform agnostic, so you can have a single container run across multiple cloud platforms.

But there is an issue with containers, and that is: containers do not run on thin air. These containers run somewhere, either locally on your computer while you're building your application, or in the cloud, using either your on-prem servers or a full cloud provider like AWS, Microsoft Azure, or the Google Cloud Platform. And so there is always an infrastructure powering them. And this introduces some problems.
First off, this infrastructure has to be created. It has to be updated. And when there's an issue, it has to be patched, alongside being monitored constantly to know when to roll out an update to the underlying infrastructure that powers your containers. And also this infrastructure needs to grow as your application scales; as you scale from thousands to millions of requests, the underlying infrastructure for your containers needs to grow too. And this means that you're paying more and you're also accumulating toil. From an SRE standpoint, toil refers to the manual work that you do repeatedly. So as your underlying infrastructure grows, there are several things that you need to do: if you spin up a new server, you probably have to install certain tools on it before you can run your containers on it. So these are problems that people face before they can use containers. And this is where Fargate comes in.

So what exactly is Fargate? Well, Fargate is a compute service on AWS, and its main purpose is to provide the right compute capacity for your cluster on AWS. Fargate introduces the serverless computing model, as Fargate manages and takes care of your compute infrastructure. Generally, we know that in the serverless model you are basically using a third-party service to run your application, but you have no visibility into that service. All you do is put your application there; it runs it, takes care of it, and you pay to use it. And this is the same ideology that Fargate brings to containers. So AWS is the one that manages the underlying infrastructure for your containers. And just to be clear here, when I say underlying infrastructure, I mean the compute instances. On AWS, compute instances are the EC2 virtual machines that you spin up, or that are spun up for you if you use a CloudFormation template to create a cluster.

So Fargate is currently available for the Elastic Container Service (ECS) and the Elastic Kubernetes Service (EKS). One thing to take note of is that Fargate is not a new service. It's been around since 2017, when it was first released, and two years later, in 2019, support for the EKS was released. So first it rolled out with the ECS, then EKS came along. Then, when using Fargate, if you want to know what exactly is going on, you have the integration with CloudWatch: the usage metrics there let you see what is being used and for how long, and you can also set up alarms to ensure that you do not overspend or exceed certain thresholds and limits.

So now back to the question: is Fargate really cost-friendly? I've seen a lot of arguments about this online, and all I would say is that it depends largely on your use case. It depends on how you plan to use Fargate. Some people have come out to say that Fargate is expensive, but one thing I would like everyone to keep in mind is that cost is not only the amount you pay; cost also includes the labor and time you spend running your clusters. So if you are spending a whole lot of time managing these clusters manually, you can also count that as cost, because it's a time and labor cost. So in the long run, Fargate pays off because it reduces the time that you spend managing this infrastructure. Rather than you being the one to manage it, why not just ship it off to AWS to manage it for you, and pay them money for it?
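Since the talk mentions setting up CloudWatch alarms to avoid overspending, here is a minimal sketch of a billing alarm created with the AWS CLI. This is not from the talk itself: the threshold, alarm name, and SNS topic ARN are placeholders, and the AWS/Billing metric assumes billing alerts are enabled on the account (that metric is only published in us-east-1).

```sh
# Sketch: alarm when month-to-date estimated charges exceed $50.
# Assumes "Receive Billing Alerts" is enabled; the SNS topic ARN is a placeholder.
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name "monthly-spend-above-50-usd" \
  --namespace "AWS/Billing" \
  --metric-name "EstimatedCharges" \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 50 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions "arn:aws:sns:us-east-1:123456789012:billing-alerts"
```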
Then also, if you want to reduce your cost further, you have Fargate support for fault-tolerant applications, although this support is only available for the ECS, not the EKS. So if you are familiar with AWS, we have Spot instances, which allow you to use spare compute capacity. They are given to you, but they can be shut down at any time, and that's the catch here: Fargate Spot should only be used for fault-tolerant applications, because it could be shut down within a two-minute notice period.

Now, estimating runtime costs. The AWS pricing calculator allows us to estimate the cost of running a particular service over a period of time. So I'd like to do a quick demo here on how much it would cost to use the EKS alone, and then to use it with Fargate. First, I'll do this with the EKS only. Okay. So for the EKS, the bill is generated based on the number of clusters that you have running. And this is what I spoke about. Hello, can everyone hear me? Yes, we can hear you. Hello, Victory, are you there? Yeah, I'm back now. Okay, so for using the EKS, you're billed based on the clusters that you have running, and you can look at more details on the calculation here. We have the monthly cost being generated here, and that is just it. So for the EKS, there's a whole lot being abstracted for you, and AWS just gives you a single bill to pay.

Now let's take a look at Fargate. You would notice that there are a lot more fields here for Fargate, because Fargate gives you the flexibility to pay for what you request. You can see we have the operating system, and the total cost here is generated based on what we have here. So if I were to increase the number of virtual CPUs here and also the number of tasks or pods to an estimate of five, you can see the pricing has gone up, and that is just to demonstrate that you pay exactly for what you request. It's not like using the EKS alone, where a lot is abstracted away and AWS just gives you a single bill; with Fargate, you're paying just for what you request and what you're actually making use of yourself.

So back to my slides. Who should use Fargate? Well, the answer is anyone who is bothered about the compute infrastructure for their containers. If you're fine with taking care of your compute infrastructure yourself, or if you're fine with doing all that manual work, I would say you should keep on using the EKS as-is; but if you're tired of having to manage the underlying infrastructure for your clusters, I would tell you to use Fargate. If you're wondering who the users of Fargate are, there are three success stories on the AWS documentation for Fargate: we have Samsung, Koala, and Vanguard, who migrated their very large workloads onto Fargate.

But before you move to Fargate, there are several constraints you should keep in mind. Although there are not many, they relate mostly to networking. Currently, there are 18 constraints to keep in mind before migrating your workloads onto Fargate. Some of the notable constraints are: first, DaemonSets are not supported on Fargate, so if you're making use of DaemonSets, you should think twice before moving to Fargate, or you can make use of sidecar containers in your cluster instead. Then two, each pod has some level of isolation, as it runs within its own virtual machine. So each of the pods in your cluster runs in a separate virtual machine, but these virtual machines are managed by AWS, not you.
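To illustrate the Fargate Spot point above, here is a hedged sketch of how an ECS service can mix regular Fargate with Fargate Spot through a capacity provider strategy. The cluster, service, task definition, subnet, and security group values are placeholders, and the cluster is assumed to already have the FARGATE and FARGATE_SPOT capacity providers associated with it.

```sh
# Sketch: keep one baseline task on Fargate and weight the rest toward Fargate Spot,
# which suits fault-tolerant workloads that can survive the two-minute interruption notice.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name demo-service \
  --task-definition demo-task:1 \
  --desired-count 4 \
  --capacity-provider-strategy \
      capacityProvider=FARGATE,base=1,weight=1 \
      capacityProvider=FARGATE_SPOT,weight=3 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}"
```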
Then, privileged containers are not supported, and the pods are only available within private subnets. Then the fifth one, which I'll talk more on later, is that the network load balancers, the load balancers that expose your application, can only make use of IP targets. In a later part of this talk I will discuss this more, but for now I'll just move forward to how Fargate works with the EKS.

So the way you configure Fargate for EKS is through a profile. A profile is a configuration that has several fields that control how Fargate works. Within a single cluster you can have several profiles across different environments or namespaces; for example, you could have a profile for the dev or staging environment, and each profile consists of a namespace, a selector, and some optional labels. Now this is great because you can have pods within several profiles. The idea behind profiles is for you to be able to organize the pods within your cluster. So if you have a development profile, you can have certain pods within that profile; if you have one for staging, you can have certain pods within that one too. So having several profiles allows you to organize the pods within your cluster. And there is a maximum of five selectors within each profile. The use of these selectors is to organize the pods, just like I've said. So when you create a new pod, there is a selector attached to it; if you want the pod to be within dev, you add the selector for that. So there's a system where you can organize these things yourself.

Then the scheduling of the cluster's pods onto Fargate is done through the use of Kubernetes controllers. For those who are familiar with Kubernetes, you know that controllers watch the state of the cluster. So as you create a new pod within the cluster, it checks if the selectors match between the profile and the pod you're creating. If they match, it schedules the pod into that profile you've created, all within Fargate.

So I have a mini demo here. I couldn't do a full demo because it takes a lot of time to create a cluster, as we all know. So using Fargate with the Elastic Kubernetes Service is made possible through eksctl. This tool is used generally for managing everything that relates to the EKS, and it also has support for Fargate. Welcome back, Victory. Can you hear me? Yes, clearly, welcome back. Yeah, apologies, there was rainfall here earlier on. All right, so I stopped at using Fargate for the EKS. And eksctl is what we use to create the Fargate profile which runs the pods. So below here is an example image of the eksctl command being used to create the Fargate profile. And one of the things that I love about Fargate is how flexible it is: you can decide to create the profile at the point where you create the cluster, or you can create the cluster first and then create the profiles later on. But something to note is that you must use the Fargate flag when creating the cluster, because by default, not all EKS clusters have support for Fargate. So at the point where you create a cluster on the EKS, you must specify the Fargate flag so it can create that cluster as a Fargate-supported cluster.
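As a rough sketch of the two flows just described (a profile created together with the cluster, or added afterwards), the eksctl commands look roughly like this. The cluster name, region, profile name, namespace, and label here are assumptions for illustration, not values from the talk.

```sh
# Option 1: create the EKS cluster with Fargate support enabled from the start
eksctl create cluster --name fargate-demo --region us-east-1 --fargate

# Option 2: add a Fargate profile to the cluster afterwards
eksctl create fargateprofile \
  --cluster fargate-demo \
  --name dev-profile \
  --namespace dev \
  --labels env=dev
```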
So the first step to using Fargate is to have the Docker image, because the image is your workload, the application or service you want to run on Fargate. So this is like a preliminary step before creating the Fargate cluster: having the Docker image, then pushing it to the ECR, which is the Elastic Container Registry. When you create your workload, you specify the URL of the ECR repository in the Deployment resource. So if you are new to this, there are several help commands within the AWS console that will show you how to do this; an example of it is shown here within this image.

So, actually using Fargate: the first step is to create the EKS cluster. And like I said earlier on, you have to specify the Fargate flag, otherwise it would be created as a normal cluster without the Fargate support. So you execute the eksctl create cluster command, then you specify the Fargate flag attached to that command. Underneath, eksctl uses CloudFormation with some templates to create the cluster. And it takes a lot of time; I have to mention that if you want to create a cluster, you probably want to do something else on the side, because it would take roughly 20 to 30 minutes before the cluster is fully created. So if I were to do this within this talk, it would take the entire time.

Let's assume 30 minutes have passed and we now have a cluster ready. You can view the components of your cluster in the CloudFormation page of the AWS console. You would see the eksctl prefix attached to the name of the cluster that you created. Here, a random name was generated because I did not specify a name; if I had specified a name, you'd have seen it here. In this case, a "beautiful unicorn" random name was generated for the cluster.

Now the next step is to create the profile. So the image on the left side here shows the fields for a profile. The profile is created as part of a ClusterConfig type. Then we have the metadata, which consists of the name; here I used fargate-fastify-cluster. Basically, within this demo, imagine that I'm trying to push a Fastify application to the EKS. Fastify is a Node.js framework for creating servers, for creating backend applications, pretty much similar to Express.js, if that sounds familiar. So within this section here, within the Fargate profiles field, I have two profiles specified. First is the Fastify default profile, with a namespace selector; then the second one is the Fastify prod profile. So like I said earlier on within this talk, in a single cluster you can have several profiles for the different environments you have within your application. Here I have the default environment, which is dev, and I have the second one, which is prod.

Then the next step is to create a Deployment resource. The snippet here on the right is a sample Deployment resource. First we have the name, the namespace, the labels, then the selectors. This would contain two pods. Then down below, I also have the image repo URI. So if you're doing this yourself, you'd have to change that placeholder to the URI of your image within the ECR.

Then this is the next interesting part of this talk, which is exposing the application to the internet. So workloads within Fargate-supported clusters are exposed through the use of a load balancer. And this is where one of the constraints of Fargate comes in, which is the fifth one that I mentioned earlier on: you can only use IP targets with network load balancers and application load balancers.
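For the preliminary ECR step described above, the workflow looks roughly like the following; the account ID, region, and repository name are placeholders.

```sh
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
REPO=fastify-app

# Create the repository (one-time) and authenticate Docker against ECR
aws ecr create-repository --repository-name "$REPO" --region "$AWS_REGION"
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"

# Build, tag, and push the workload image
docker build -t "$REPO" .
docker tag "$REPO:latest" "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO:latest"
docker push "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO:latest"
```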
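A ClusterConfig with the two Fargate profiles described above would look roughly like this. It is a sketch reconstructed from the talk, not the exact slide; the region and the prod label are assumptions.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fargate-fastify-cluster
  region: us-east-1              # assumed region

fargateProfiles:
  - name: fastify-default-profile
    selectors:
      # pods created in the "default" namespace are scheduled onto Fargate
      - namespace: default
  - name: fastify-prod-profile
    selectors:
      # pods in the "prod" namespace carrying this label are scheduled onto Fargate
      - namespace: prod
        labels:
          env: prod
```

A file like this can be passed to `eksctl create cluster -f cluster.yaml`, which creates the cluster and the profiles in one step.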
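And here is a sketch of the Deployment described above, together with the load balancer Service that the next part of the talk covers. The image URI is a placeholder to be replaced with your own ECR URI, the three annotations follow the AWS Load Balancer Controller's convention for IP targets, and the names and namespace are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastify-app
  namespace: default
  labels:
    app: fastify-app
spec:
  replicas: 2                      # the two pods mentioned in the talk
  selector:
    matchLabels:
      app: fastify-app
  template:
    metadata:
      labels:
        app: fastify-app
    spec:
      containers:
        - name: fastify-app
          image: "<ECR_IMAGE_URI>"  # e.g. 123456789012.dkr.ecr.us-east-1.amazonaws.com/fastify-app:latest
          ports:
            - containerPort: 5050
---
apiVersion: v1
kind: Service
metadata:
  name: fastify-app
  namespace: default
  annotations:
    # IP targets are required on Fargate because there are no instances you control
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: fastify-app
  ports:
    - port: 5050
      targetPort: 5050
```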
And this is so because you do not have control over the instances. Remember, we said Fargate is used when you want AWS to manage the instances for you. So you cannot use instance targets with Fargate, because those instances are not managed by you; they're managed by AWS. And to do this, you need to have the AWS Load Balancer Controller installed, and that is done using Helm. The documentation page contains several steps to have that installed. They are quite lengthy, so I could not put them here, but the installation guide is very thorough; it would guide you through the process of installing the load balancer controller into your cluster.

After doing that, once you've confirmed you have the load balancer controller installed, you can now proceed to create the load balancer. This is very similar to the load balancers that you might have created in the past if you've used EKS before, but there's a major difference here, and that is it has some annotations which specify the IP target type. You can see three annotations there. And after that we have the selector, which matches with the previous ones, then the port that you want to expose, which is port 5050; that's where the Fastify application would be running.

And that's the last step. After this, when you apply all these configurations, you would have a Fargate cluster that runs and generates a very minimal bill for you, because you're only using what you request. Unlike a regular cluster, which would probably have two instances attached to it, this would probably have just the one it needs. Then as your application grows and you create more pods, those instances would be created automatically by AWS; and when your demand reduces, they'd also be scaled down for you, to ensure that your costs remain minimal and you do not exceed your limits.

So for further resources on Fargate, we have the Fargate documentation. Then there's a guide written by one of the AWS solutions architects on moving Kubernetes to the cloud. Then we also have a guide that compares EKS with Fargate support against regular EKS, which you can go through in your spare time. And that's the end of my presentation. I don't know if there's any question from anyone. Can anyone hear me? Yes, I can hear you. That was an awesome presentation. I agree. I feel I messed up my presentation. It's fine, I think we had a similar issue yesterday. The power has been going off since this morning too. Okay. Yeah, no worries, it happens. Any questions, anyone? If you have questions later, or you are watching this later on YouTube and you want to ask Victory questions, you can reach out to him on Twitter. Let me also display the Twitter handle so it shows, yeah. Okay, is it the one with 01 added, or which one is correct? Let me fix it; there's a 01 at the end of it. Oh, okay. So if you have questions later, or you're watching this on YouTube later and you have questions, you can reach out to him on Twitter. Awesome presentation, by the way, Victory. Thanks.