My name is Pavel, and I am from Poland. I am based in Thailand and I work for a DevOps company in Singapore. We are a team of three engineers providing DevOps and classic services, and I will show you how we build solutions on AWS with a CI/CD pipeline and deploy them on AWS ECS. If you'd like to contact me, you can find me on LinkedIn by searching for my name and surname.

But before we start, I'd like to share a motto with you. This is my first meetup and first lecture, so forgive me if I'm a little stressed, but I think it's always good to share something that will stay in your mind. This is something very close to my mind and my heart: continuous improvement is better than delayed perfection. This is the culture that we are trying to implement and evangelize with our clients: it's always about the quickest feedback, about the quickest improvements, and it doesn't matter if we make mistakes, it doesn't matter if we fail, as long as we take the good things from it, as long as we learn from it.

So, why did I decide to present this solution? We want to use one DNS record for multiple regions. We're going to have an Application Load Balancer serving traffic to a target group, with tasks in Elastic Container Service behind it. We want high availability and fault tolerance, so if one of the regions goes down, our application is still up. It doesn't matter whether this load balancer serves a frontend, our API, or any other backend connected to databases. We are also using Terraform because we want consistent infrastructure between regions and environments. Plus, we have control over deployment: we can decide, for example, which region we want to deploy to first, or whether we want to deploy to both at the same time.
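The one-DNS-record idea can be sketched with Route 53 geolocation records in Terraform. This is a rough sketch under assumptions: the zone ID, record name, and ALB references below are placeholders, not the actual project values.

```hcl
# Two records under the same name: a geolocation record for Asia and a
# default ("*") record so every other location can still resolve the name.
resource "aws_route53_record" "asia" {
  zone_id        = var.zone_id       # hosted zone ID (placeholder)
  name           = "api.example.com" # placeholder record name
  type           = "A"
  set_identifier = "asia"

  geolocation_routing_policy {
    continent = "AS"
  }

  alias {
    name                   = var.alb_ap_southeast_1_dns_name
    zone_id                = var.alb_ap_southeast_1_zone_id
    evaluate_target_health = true
  }
}

# The Europe record is analogous, with continent = "EU" and the
# eu-central-1 load balancer as the alias target.

resource "aws_route53_record" "default" {
  zone_id        = var.zone_id
  name           = "api.example.com"
  type           = "A"
  set_identifier = "default"

  # country = "*" is the catch-all record; without it, clients outside
  # the listed locations get no answer at all.
  geolocation_routing_policy {
    country = "*"
  }

  alias {
    name                   = var.alb_ap_southeast_1_dns_name
    zone_id                = var.alb_ap_southeast_1_zone_id
    evaluate_target_health = true
  }
}
```

The default record here points at the same ALB as the Asia record, which matches the behavior described later in the talk: anyone outside the explicitly routed continents lands on the Singapore region.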
So if we deploy first to a region that is less important to us and it fails, we have less stress and we just do the rollback, right? The clients, the users, will not observe any impact.

This is a short diagram. I'm not a specialist in building slides, so this is the last slide, and then we'll jump into the web console and the code. The idea is that we have a source. In our case this is CodeCommit, but as you probably know, we can connect CodePipeline to GitHub, to Bitbucket, or do any other custom synchronization; in our case, our backend code, written in Python, lives in CodeCommit. Whenever there is a merge to the master branch, an Amazon EventBridge event triggers the pipeline, which builds the container image, pushes it to the container registry, and as the next step CodePipeline performs the deployment. We also have two additional accounts, because everything I've described so far happens in one account. We call it the shared account, or DevOps account, and there is a trust relationship between it and our test account and production account.

There's another feature I used here: Resource Access Manager. I created two VPCs, one in each region, in the shared account, and shared them to both accounts. You can see that the test account is using ap-southeast-1 and eu-central-1, and these VPCs come from the shared account; the same goes for the production account.

OK, let's go to the browser. The pipeline looks quite simple. We have the source code, and I triggered it just before we started the meetup. We performed the build with AWS CodeBuild.
It was a simple docker build and docker push to the container registry. And this is where the magic comes in for me: when you go to CodePipeline and create a pipeline from the browser, on the deploy stage you can choose ECS, and you can choose multiple regions, but you cannot choose a different account. That is not possible from the browser, but we can do it with an API call, or with Terraform or CloudFormation. So that's the tip I wanted to share with you: you cannot do it from the browser.

As you can see, twenty minutes ago we deployed a version of our code. I'll show you how it looks in the browser. It's very simple: we are just returning a JSON. It's a Python Flask application. We have greetings from the account starting with 0.5; this is our test account. The region here is eu-central-1, because currently I'm connected to a VPN to Poland, and this is the new version, deployed during the AWS meetup. On production we still have the previous version of the code, but you can see that there is a different account here. So I will just trigger it. We have a manual approval step here, which is up to us, right? It depends on the application: if we have a lot of automated tests and we are pretty sure everything went well on staging, we can skip that action, but I wanted to pause it here. Then we have the deployment. In my case I'm deploying first to the European region, but we could have it in parallel, right? At the same level. Let's take a look at how it goes. OK, we are triggering.

From the console here we have a link, but when we click it (I'm using SSO) I will not be able to see that cluster, because we would have to switch to a role in the other account, whether by assuming the role directly or through SSO. OK, so let the pipeline run, and I'd like to show you a little bit of the code. As I was saying, we have a simple backend.
It's a simple Flask application, and we are using environment variables. As for the structure of the code, I decided to make one Terraform module for the shared account. This is where we have the VPCs I mentioned. I'm using modules for that, with the AWS provider. I'm not sure if this is big enough for you; maybe I will make it a bit bigger. OK. So we have two VPCs: one in ap-southeast-1, one in Europe.

One of the challenges of using Terraform for this, and this is one of the tips I want to share with you, is that when using Resource Access Manager I decided to use the for_each function in Terraform, and I ran into a problem: we are already using a module for the VPC, and for_each will not accept dynamic data coming from other modules. That's why I put a comment inside the code saying that for the initial provisioning we have to either take that part out or comment it out before we can run it. This is something I ran into with Terraform. It's a limitation, but it's for sure worth knowing.

Speaking about CodePipeline and its tasks, the things we have to remember are: we have to provide two S3 buckets, one in each region, and for each of these buckets we have to create a KMS key for encryption; and of course we need to provide specific roles for the pipeline itself. The first stage is the source (we are using CodeCommit here, not a big deal), then we have CodeBuild, and when we go to the deployment stage, this is where we have to configure things. The standard configuration, when we are using the same account, would be only this part of the code. But when we are triggering an ECS cluster in a different account, then, thanks to providing this role, which is from the test account, and defining the region, we are pointing CodePipeline to do the cross-account deployment. (Hardcoding the ARN here is of course not best practice; we could refer to another resource instead.) This is the thing that we cannot do from the web console.
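The cross-account, cross-region deploy stage described above can only be expressed in code. A minimal Terraform sketch follows; the pipeline name, bucket and key references, account ID, and role ARN are placeholder assumptions, and the source and build stages are elided.

```hcl
# Cross-region CodePipeline: one artifact store (S3 bucket + KMS key)
# is required in every region the pipeline deploys to.
resource "aws_codepipeline" "backend" {
  name     = "backend-pipeline" # placeholder
  role_arn = aws_iam_role.codepipeline.arn

  artifact_store {
    region   = "eu-central-1"
    location = aws_s3_bucket.artifacts_eu.bucket
    encryption_key {
      id   = aws_kms_key.artifacts_eu.arn
      type = "KMS"
    }
  }

  artifact_store {
    region   = "ap-southeast-1"
    location = aws_s3_bucket.artifacts_ap.bucket
    encryption_key {
      id   = aws_kms_key.artifacts_ap.arn
      type = "KMS"
    }
  }

  # ... Source (CodeCommit) and Build (CodeBuild) stages omitted ...

  stage {
    name = "Deploy-Test"

    action {
      name            = "ECS-eu-central-1"
      category        = "Deploy"
      owner           = "AWS"
      provider        = "ECS"
      version         = "1"
      run_order       = 1 # same run_order on both actions = parallel deploys
      region          = "eu-central-1"
      input_artifacts = ["build_output"]

      # Role in the *test* account -- the cross-account part that the
      # web console cannot express.
      role_arn = "arn:aws:iam::111111111111:role/codepipeline-ecs-deploy" # placeholder

      configuration = {
        ClusterName = "backend-test"
        ServiceName = "backend"
        FileName    = "imagedefinitions.json"
      }
    }
  }

  # ... second-region deploy action, manual Approval stage, and the
  # production deploy stages follow the same pattern ...
}
```

The `region` and `role_arn` arguments on the deploy action are exactly the two knobs the console form does not offer; Terraform (or CloudFormation, or a raw API call) is the only way to set them.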
Then we have the second deployment, to the second region, and we can configure the order here: if you would like to have them run at the same time, you can change the run order and they will be executed in parallel. We have the approval step, and then we have a similar configuration for production.

Also, something I was thinking about when creating this code for the shared account: since we are using different providers, I decided to add a provider that has access to each account. I am using the default organization account access role here, but of course we could create a specific one on both the test and production accounts. And I made a small sub-module with an IAM role which allows CodePipeline to interact with the ECS service, so that everything is deployed at once: when we deploy this part of the code, the roles on both accounts are already prepared.

I also have a module here for the backend part. This is the load balancer with the target group, all the security groups, the configuration of the cluster, one service, plus the task definition. And here is one more thing I want to share with you that might be challenging with ECS: when we use it with Terraform, and I think also with CloudFormation, we have to specify an image when we create the service for the first time. We have to define some image there; of course we can put in any placeholder, like nginx, if it's non-production, and just have it running in the beginning. But afterwards, whenever we make any changes, let's say we want to increase the CPU or the memory for a different environment, or change secrets or anything else, Terraform will try to revert the task definition to that previous image. How I solved it here: I'm pointing the image tag at the environment, which in my case is test or prod. In CodeBuild, while I'm building the application, I'm using the commit hash from git, just cutting it down to seven characters; but when I build the image I'm also tagging it with test and prod and pushing those tags to the repository. If you are using a branching git flow it will look a little bit different, but in this case, to keep it simple, any time I push new code to master I know that the image currently running on ECS carries part of the commit hash, and if I make any changes in the Terraform code and it creates a new task definition referring to prod, we will still have the same application version.

OK, we see that production is deployed, so I guess if we refresh the page here, we get the new message. And when I disconnect from the VPN, the region should change; we see that now we are in ap-southeast-1. We are still using the same DNS name, and the same happens with the test environment.

One thing to remember, because I'm using a Route 53 geolocation strategy here: when I'm passing data down to the module, I have to ensure that whenever somebody comes from another region, they can still reach the application. If we served only Asia and Europe, people from other regions would not be able to reach the application. So when we have this country wildcard, it creates a default policy, and we can see it in Route 53 here: this is the default record, pointing to ap-southeast-1. It means the Singapore region is the most important one for us, and anybody coming from a different region ends up there. If we take a look at the DNS checker and refresh, we see that we can resolve our DNS all around the world; but if we change it from default to Asia, I guess it will take maybe ten to thirty seconds, but we will start seeing that some of the regions, some of the checkpoints, are not able to resolve the DNS. So this is an important thing to remember if you want to configure geolocation in front of any of your resources with Route 53: have at least one default record. It's a different thing when we are using latency-based routing: latency will always try to find the closest place
to be able to connect. Thank you, I think that's all I wanted to share with you today. If you have any questions, you can catch me somewhere around here. I hope you got some value from it, and have a good evening. Any questions?

[Audience] In CI/CD we generally add quality and security checking: code smells, scanning tools, those kinds of practices. My question is: are you going to introduce any AWS or SaaS product into this pipeline for security checks, to make sure the build is sound and nobody is slipping something through? How can we achieve that kind of practice?

I guess whatever tool I was using, I would configure it with CodeBuild, and I would make sure that if there is some output that doesn't satisfy me, it causes a failure. We can do it with an exit 1, or with some additional scripting inside CodeBuild, to ensure that something prevents us from further deployment.

[Audience] Let me give you an example: suppose I want to check code quality as well as vulnerabilities, and for that I use SonarQube. I would need to integrate SonarQube inside CodeBuild. If I write a pipeline for Ruby or some other language, how do we pass the code to SonarQube? How do we integrate that third-party tool?

I'm sorry, I don't have much experience with SonarQube; I would run such checks from a different pipeline stage. For example, I was working with a different vendor and we had a problem with one of the tools, jq, which works with JSON files. There was a mistake in a path, jq was exiting with, I think, code 2, and the pipeline kept going, so nobody noticed it for some time. So I guess each case will be individual: you have to catch the error, or catch the output that you get, during your pipeline.

[Audience] Apart from that: is there any way we can introduce Ruby? I am not very familiar with AWS CI/CD, mostly with Jenkins, Bitbucket, and GitLab CI/CD practices. Can we write the CI/CD pipeline itself in the Ruby language?

I think this would be a question for somebody else; I can sit with you afterwards and point you to some of our specialists.

[Audience] I saw that you were interacting with the load balancer service and accessing the application through the load balancer. Is there any method to get a static IP address to access the ECS tasks directly, so the address doesn't change dynamically?

AWS has something called ECS Exec. With some additional permissions (it's quite well described on the AWS blogs; I was testing it on one of the projects) we can perform something like docker exec inside the container: we can use the AWS CLI to connect to the task and run commands inside it. If that was the question, right? We would like to connect to the running container.

[Audience] It's a running container, so there will be a dynamic IP address assigned to the ECS task, and we are using the load balancer to reach the application. Is there any method to connect directly to the application from outside, without the load balancer?

I would say no, because in my case we are deploying the containers, the tasks, to private subnets, so we would need some bastion host to reach the IP directly; that would be a custom solution, right?

[Audience] But that is a dynamic IP address, so it keeps changing.

In this solution we have dynamic IPs, but ECS handles that for us: it's registering the
task to the target group automatically during the deployment. So I guess we could have these tasks registered to the target group, but then we would have to somehow point at them, or maybe query the target group to get those IPs and connect. To be honest, I haven't done it before, so I'm just guessing from my experience that I would go in this direction and look there.

[Audience] We have a similar use case, because we are not behind the load balancer services but running directly on the target ECS tasks, so I was wondering if there is any solution. Thank you.

You're welcome. To connect to the task, it is possible on Fargate using the AWS CLI. Thank you so much.
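The Fargate connection mentioned in the answer is ECS Exec. A rough sketch of enabling it on the Terraform side, with placeholder resource names:

```hcl
# ECS Exec lets you open a shell in a running Fargate task without a
# bastion host or a public IP.
resource "aws_ecs_service" "backend" {
  name                   = "backend" # placeholder
  cluster                = aws_ecs_cluster.main.id
  task_definition        = aws_ecs_task_definition.backend.arn
  desired_count          = 2
  launch_type            = "FARGATE"
  enable_execute_command = true

  network_configuration {
    subnets         = var.private_subnet_ids # tasks stay in private subnets
    security_groups = [aws_security_group.backend.id]
  }
}

# The task role additionally needs the SSM messaging permissions
# (ssmmessages:CreateControlChannel, CreateDataChannel,
# OpenControlChannel, OpenDataChannel). Then, from the AWS CLI:
#
#   aws ecs execute-command --cluster backend \
#     --task <task-id> --container backend \
#     --interactive --command "/bin/sh"
```

This reaches a specific running task through the SSM channel rather than its IP, which sidesteps the dynamic-IP problem raised in the question.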