Hello. How are you doing? Great. Pizzas are great, right? OK, so tonight we're going to talk about DevOps. We're going to talk about IaC. We're going to talk about CI/CD. And before we go to the sessions, how many of you already implement DevOps here? How many of you are already using CDK, or AWS CDK? Nice. The fewer, the better. So today we're going to talk about CI/CD and also CDK and how to integrate all of them. These are the two main pillars in DevOps. And I'm hoping that I can give you some guidance on how to use AWS tools, namely AWS CDK, to integrate IaC and CI/CD. My name is Tony Prakaso. I am a senior developer advocate covering ASEAN and also AEM. I have 17 years of experience in technology. I started as a software engineer, then R&D, and then SysOps as well. So DevOps is something that is really close to my heart, because I've been doing it for years. And I'm super excited to share with you AWS CDK and how to implement CI/CD and IaC with it. And I love coffee. So if you want to have a discussion with me about modern application development, serverless or containers, and I specialize in microservices, just send me a message on Twitter, LinkedIn, GitHub, or even YouTube, and we can have a nice coffee discussion together. So these are the key takeaways for this session. I'm hoping that after this session you understand how DevOps can enable you to improve development velocity by implementing IaC and CI/CD, and how to integrate them with AWS CDK. And this is what we're going to deploy today. Obviously I'm not going to build this from scratch; I have everything ready on my laptop. But this is the architecture: we're going to deploy a serverless API. This is a simple REST API. We have four AWS Lambda functions, everything is centralized within Amazon API Gateway, and all the AWS Lambda functions integrate with Amazon DynamoDB. So now let me ask you a question.
How many of you are still provisioning infrastructure through the console? One, two, you, Steve. OK, so I'm hoping that after this session you can start using AWS CDK. The reason I would not advise creating your infrastructure through the console is that you cannot have a predictable and repeatable architecture. So that's the idea of IaC. But first of all, let's talk about DevOps. Throughout my engagements across ASEAN and AM, the very first question I get about DevOps comes from developers: what actually is DevOps? In a nutshell, DevOps is a combination of three elements: tools, practices, and culture. And what makes DevOps really interesting is that each organization has a different approach to building stuff. They have different mechanisms for delivering applications from development to staging to UAT to production. And a lot of organizations also come with their own tools. At AWS, we have pillars covering microservices, CI/CD, IaC, and monitoring and observability. Another organization probably has different pillars for DevOps. But in a nutshell, it's a combination of those three components. Now, the reason DevOps has become more popular in recent years is that organizations gain real benefits from practicing it. They deploy more frequently, going from monthly to daily or even hourly. Their change lead time drops from one to six months down to one to seven days. And the change failure rate drops from 46-60% down to 0-15%. And this is awesome. Why? Because it makes organizations more agile in delivering applications and features to their customers. But the problem is that, in reality, not everyone has moved to DevOps already. So this is something I would like to elaborate on: why?
Why hasn't everyone already moved to DevOps if the benefits are really that good? The first problem, and these are the complaints or feedback I got from developers, is that it's too difficult to get started. There are so many tools and so much theory behind DevOps, and they need to use different tools that are not designed to integrate seamlessly with one another. Another problem is that they don't have any centralized oversight across all of these tools. It also requires a certain expertise to implement DevOps with best practices. And above all, it's just too much for developers and builders to get started with DevOps, because there is too much they need to learn at once. Now, in this session, I want to share with you two pillars of DevOps. The first one is IaC, infrastructure as code. The other one is CI/CD. And I also want to show you how to integrate these two. Let's first talk about IaC. When we build applications, we usually juggle all the code and all the configuration. And the principal notion of infrastructure as code is that we should treat infrastructure the same way we treat code. For example, if we expect our applications to behave the same in different environments, we should expect the same of our infrastructure. The reason is that all applications need infrastructure. But it's not only infrastructure that we deploy with IaC, it's also architecture. And this is something we often overlook: we focus on the infrastructure and tend to forget about the architecture. The second key reason to implement IaC is that we can release infrastructure changes using the same tools as code changes.
And then lastly, this is something that I really like: we can replicate the production environment in staging environments to enable continuous testing. This aligns with the Twelve-Factor App, if you've ever heard of it. Anyone? Cool. So we have the build, release, run factor, and also dev/prod parity, where we try to minimize the differences between environments. Why is that important? Because if we cannot replicate the production environment in staging, we cannot reproduce production issues there. So this is one really cool technology, and that is AWS CDK. And I need to share with you that AWS CDK is not the only tool. In the past, we usually used bash scripts. How many of you have used bash scripts to provision infrastructure? One, two, three. And then we moved on to tools you've probably heard of: Ansible, then Chef and Puppet. And then the evolution went towards abstraction: we had Pulumi, we had Troposphere. And at the end, now, we have AWS CDK. What makes AWS CDK really cool is that you can code your infrastructure in programming languages. In the past, and up until now, we could configure our infrastructure using declarative syntax like YAML or JSON. But with AWS CDK, you can now actually code using your preferred programming language. You can code in TypeScript, in Python, in JavaScript, and Go support is also in preview. This minimizes the friction when you code your applications and then want to code your infrastructure: you don't need to switch context, because everything is there in your IDE. So this is one super cool tool for implementing IaC. And one key concept in CDK is the construct. Now, how many of you already use AWS CloudFormation? With an L1 construct, which is low level, whatever you have and see in CloudFormation, you can also create in AWS CDK.
Now, an L2 construct is about abstraction. If you want to create an AWS Lambda function or an Amazon DynamoDB table, you can easily do that by instantiating an object, and then you have your resource. L3 is an even more abstracted layer: if you want to create, say, containers on AWS Fargate integrated with an Application Load Balancer, you would use an L3 construct. In my experience, I usually just go with L2 constructs, sometimes I drop to L1, and if I want a more opinionated infrastructure and architecture, I go to L3. So that's the way it works in AWS CDK. Now, another important part when it comes to tools is that it's not about how we adjust ourselves to the tools; it's about how the tools fit into our workflow. And this is something I really appreciate about AWS CDK: it doesn't obstruct or change your workflow. It's as simple as writing your code and then building it. If you use TypeScript or JavaScript, you can just npm run build. Then you can go directly to cdk deploy if you want to deploy straight into your AWS account. You can also check the changes between the current state and the previous state using cdk diff. And you can generate the CloudFormation template using cdk synth. So this is a really flexible tool, and hopefully it fits into your workflow. Right. So I'm going to show you how you can code your architecture and infrastructure with AWS CDK by implementing this serverless API. Can you see it? Oh no, you kind of, yes. Can we mirror it? Sure, it will be fine with mirroring. Okay, can you see it now? Cool, it's the code. Yep, I'm going to zoom in. Right, it doesn't look good. All right, so this is what CDK code looks like. It's written in Python, but you can also program your CDK in other languages. This is just one example.
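The build, diff, synth, deploy workflow the speaker describes can be summarized as a short command sequence. This is a sketch for a TypeScript project against an already-bootstrapped AWS account; Python projects can skip the npm build step.

```shell
npm run build   # compile TypeScript to JavaScript (skip for Python projects)
cdk diff        # compare the app with the currently deployed stack
cdk synth       # emit the generated CloudFormation template
cdk deploy      # deploy the stack to the AWS account
```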
And the first thing that we will need, because we're going to implement, right, let me swap it again. How do you know where I'm going now? Okay, so because we're going to implement the serverless API, first of all we need Amazon DynamoDB, right? Then the other components are the four AWS Lambda functions. And lastly, we need Amazon API Gateway. But we also need something in between, and that is IAM permissions. So we're going to configure all of that in AWS CDK. Okay, right. So the first thing that we need to define is the DynamoDB table, and this is how you define it. In these lines, I define a DynamoDB table with this table name; I have a stack prefix here, it's just like an ID. Then I define the partition key: I use id, and the attribute type is string. For this DynamoDB table, I also set the removal policy, which means that once I no longer need this architecture, I can easily remove everything. I need to mention that this is not for production use; this is only for the demo. Then I also define the read capacity and the write capacity. And this is amazing, because everything that you see on the console, you can program in AWS CDK directly using this object, and you can define all the properties that you need. And it's not only this: you can also configure auto scaling, and you can configure DynamoDB Streams if you want to trigger an AWS Lambda function directly. Everything that you see in the documentation, you can also program in AWS CDK. Now, the next thing that I need to do is define the IAM role. Obviously, I need to grant permissions to the service principal lambda.amazonaws.com. And then I define the permissions to create log groups on Amazon CloudWatch. Why? Because obviously our AWS Lambda function needs access to CloudWatch Logs. Now, the next thing that I need to define is the access to DynamoDB.
Now you can see that here we have granular control over what you want to give your AWS Lambda function access to. And it's not only AWS Lambda: if you are running containers on Fargate or EC2, you can also define the task role using this approach. This is called least privilege. It means that I'm not going to give all the permissions, only the permissions that I think this application needs, and that is to connect and integrate with DynamoDB. Now, the next question is: do we need to guess all the actions we need to grant to our Lambda, Fargate, or other services? No, there's actually an easy way to do this, which is by calling grant_read_data on the DynamoDB table, like this, and then read access is granted to your AWS Lambda function. That's it. But this is the way I like to configure my IAM role, simply because I have full control over what my AWS Lambda function can access. Now, the next component that I need to define is the AWS Lambda function itself. The first thing I define here is the code. So where is the code? The code is actually here in my IDE. If I just go to this folder, I have all the AWS Lambda functions. And this is what I meant about AWS CDK: it minimizes the friction of switching context between defining your architecture and programming your application. Now, here are just the usual properties of AWS Lambda, like the timeout, the role itself, and enabling tracing, which is the integration with AWS X-Ray. You don't need to do that manually; everything is configured here. Then I define the runtime here: the Lambda runtime, Python 3.8. And you can also integrate with AWS Lambda layers. That's something we could discuss, and it deserves another session, but essentially AWS Lambda layers are like libraries that you can integrate nicely with any of your AWS Lambda functions.
So, like I said before, anything you can see or configure in the AWS Lambda console or in AWS CloudFormation, you can actually do with AWS CDK. Now, when my AWS Lambda function receives an HTTP request, it needs to save to or look up the DynamoDB table. So the next question is: how can I pass the DynamoDB table name from my AWS CDK code to AWS Lambda? We can do that by adding environment variables, and this is defined here. I simply call add_environment with TABLE_NAME and a reference to the previous object, which is the DynamoDB table name. And then we can read it in the AWS Lambda function. The same goes for the other AWS Lambda functions for list data, get data, and delete data. Now, the last element that we need to create is the Amazon API Gateway. Here I implement the minimum requirements of a REST API over HTTP. The first thing I need to do is create an API object. Here I have this RestApi construct; it's already in AWS CDK, so you don't need to create it from scratch. Then I need to define the integration for our AWS Lambda functions. And then I add a resource: I add the data resource. That means that when the API Gateway is created, I can access this AWS Lambda function through the endpoint by adding /data. For /data, I define two methods, POST and GET: POST to save data and GET to list data. I also want to look up by ID, by the data ID. I can easily do that with this directive, adding the methods for GET and also DELETE. The rest is just the outputs: I want to see the DynamoDB table name and the Amazon API Gateway URL once CDK finishes processing. And lastly, I just define the CDK app itself: I instantiate the app and define the stack, and that's it. All right. Now let's try to run this. Let me try. So I mentioned that with AWS CDK you can generate a CloudFormation template, and you can do that by running cdk synth.
And this is what it looks like. I know it's quite long. Right. So all of these lines are the CloudFormation template that you can pipe out into a file, plug into CloudFormation, and it's going to deploy your architecture, right? And imagine that we can easily program all of this from AWS CDK without having to learn AWS CloudFormation. And when talking about deployment with AWS CDK, you can actually just deploy by running cdk deploy. And this is amazing. Why? Because it minimizes the friction of delivering the architecture from development to production, or any environment. And here I have this API Gateway, sorry, the API endpoint here. And if I test it out with /data, I already have two records in my DynamoDB table. Now what happens if I want to add more data? Oops, what do you see? Oops, sorry. My bad. Oops. This is something wrong. Right, so something is wrong here, but I assure you it actually works. I think it's somewhere here. Oops. And yes, success. And if we list the data again by calling /data, we can see a new record has been inserted into DynamoDB. So it is that easy now to provision infrastructure, configure the integration, make an architecture, even deploy your code if you use AWS Lambda functions, and even if you have containers running on Fargate, you can do everything with AWS CDK, okay? So yeah, that is the first demo. And is this the right screen? Right. Okay, so moving on. Now we have covered IaC. What you just saw was IaC, infrastructure as code: we treat architecture the same way we treat code. But still, as you saw, when I wanted to deploy my application into my account, I needed to run cdk deploy. What I actually want is that when I push my code to a Git repository, it triggers an automatic process that deploys directly to whatever environments I have.
But it's not only about the environments; there are a couple of steps or processes we need before that. And this is where we come to CI/CD. In CI/CD, we know there are a couple of stages, called the release process stages. These are the minimum stages. First of all, we have source, where we check the source into our Git repository. Then we build the source: we compile the code, we do the unit testing and checks. And if you are running containers, you need to create the containers: you need to do the docker build and push the image to the image repository as well. Moving on, the next stage is integration testing. We need to do a lot of testing there: load testing, UI testing, security testing, any testing, you name it. And the last stage is deployment. We need to deploy the application to the environments. It's not strictly only one environment: if you have a UAT, a staging, a client staging, we need to be able to deploy to those environments as well. But now, I'm sorry, this one, okay. So these are the release process stages. And how is this relevant to CI/CD? There are actually three terms in CI/CD, although the acronym looks like only two things, C-I and C-D. CI stands for continuous integration; CD stands for both continuous delivery and continuous deployment. Continuous integration is simply having an automated mechanism to get our source code and build it into an artifact. This artifact is the one we're going to deploy. Continuous delivery is where we get the source, build it into an artifact, test the artifact, and before the deployment, it needs a manual approval. Continuous deployment is a fully automated process. And I've seen a lot of developers use continuous deployment.
I also know some developers who use continuous delivery, but the most common question I get asked is whether to use continuous deployment or continuous delivery. My answer is: if you can rely on your testing, on your integration testing, security testing, load testing, and unit testing, then go ahead with continuous deployment. That means when you push your code to the Git repository, it gets automatically deployed to your environments. But in some cases we also need an approval before we can deploy. Sometimes the approval needs to come from the business team or other departments, and that's when you need to implement continuous delivery. What happens in continuous delivery is that before the deployment to an environment, somebody needs to click approve and provide a reason, and then it deploys to the environment. So that, in short, is the definition of CI/CD, right? The next question is: what are the best services to implement this? To be honest with you, there are a lot of tools you can use. You can use Jenkins, you can use Travis, you can also use GitHub Actions. But my preference is AWS CodePipeline. Not because I work for AWS, no. Well, probably 30% of it, yes. But AWS CodePipeline is super simple. The problem with implementing DevOps is not that the tools are complicated; it's that our perspective is too complicated, and that makes it hard for us to implement DevOps. CodePipeline is super easy. Let me show you a sneak peek of the next demo. Is it the right one? Is that the right one? Okay, cool. So this is the right screen. This is what AWS CodePipeline looks like. You have the flexibility to define all the stages here. Here I define four stages. Sorry, it's actually more than four stages, because this is the deployment from AWS CDK.
But essentially we have source, to get the code from my Git repository; we have build, to build the application; and we have testing here inside staging. And this is super simple, because what we need is to ship our code into production so it can deliver value to our customers. That's it. And this is why I love AWS CodePipeline. And, right. The other service that you can use is AWS CodeBuild. Oh, my slide again. Right. Is this the right slide? Presenter view. Presenter view. Is this the right one? I mean, the slides are not full screen. Right. Presenter view. Okay, so let's use another method. Oh, by the way, all these materials, including the presentation and the code, are available on GitHub. I'm going to send you the link. All good then? But, okay, so apologies for that. Another service that you can use is AWS CodeBuild. With AWS CodeBuild, you can easily have a machine to build your applications. In the past, we usually needed dedicated machines to build our applications, but with CodeBuild everything is fully managed. And the beauty of AWS CodeBuild is that you can integrate it with AWS CodePipeline. So, anyone here use GitLab Runner? Right. AWS CodeBuild plays a similar role to that runner. AWS CodeBuild is a fully managed build system that helps you build your applications by running a script. So the next demo that I'm going to show you is how to implement CI/CD with CDK Pipelines. Now, I mentioned that CDK has constructs, with three levels of constructs. CDK Pipelines is a level 3 construct, and what it does is help you create a release pipeline using AWS CodePipeline and AWS CodeBuild. So what we're going to do is implement CI/CD with AWS CodeBuild. Sorry, with AWS CDK. Right. So here is the AWS CDK code, and here I define a pipeline stack. Yeah. And above this line, I also define the account ID and the region.
And the reason I need to define this is that with CDK Pipelines, you can actually deploy to multiple accounts. For example, you have one account for staging and another account for production; you can deploy to both. And you can also deploy to multiple regions. So this is something that you can do with CDK Pipelines. Now, the next thing I need to do is define the pipeline stack. The pipeline stack is quite straightforward: we need to create a CodePipeline here, and then we define the steps we need in our release pipeline. Here I define a ShellStep. There are two kinds of steps you can use with CDK Pipelines: a CodeBuildStep and a ShellStep. A ShellStep, in a nutshell, just lets you run shell commands in your build stage. And here I define the input. This is where I define the Git repository, and then I choose the branch, which is the main branch here. And then I define the GitHub connection ARN. Now, you're probably asking how we can integrate our Git repository with CDK Pipelines. This is the answer: by defining the connection ARN. There are various ways of doing this. You can store your GitHub token, if you use GitHub you can get the token, or you can just create a connection using the AWS developer tools, here, under Connections. Is it too small? A little bit small. You just create the connection, and it integrates with Bitbucket and GitHub. And then, yeah, you just finish the wizard, and at the end you're going to have a result like this, a GitHub connection. And this is something that I really like. Why? Instead of using a GitHub token, the first thing is that I don't like to store tokens anywhere. And the other thing is that with the developer tools connection, I can reuse the connection: not only can I use it in CDK, I can use it with other tools within the AWS ecosystem. Cool. So the next one is the commands.
These are basically the commands that I need to build the application. If you look at my GitHub repo, demo number two, here, this is the GitHub repo. Another beauty of CDK Pipelines is that you can define the source folder. So regardless of whether you are using a monorepo, multiple repos, or Git submodules, you can define the folder that you want to deploy by specifying it here in the commands. Then I install AWS CDK, I install all the requirements, and then I run cdk synth here. cdk synth is going to create the CloudFormation templates, and then it's going to deploy. I can also define the stage, which here is staging. And that's it. This is CDK Pipelines. And in the end, when I deploy this application, this is what I have: I have source, I have build, I have update pipeline, I have assets, and staging. That's it. Now, the next demo I want to show you is this: okay, that's all good, but that is continuous deployment. What if I don't want continuous deployment for my application? How can I implement continuous delivery, with a manual approval before I deploy? And this is the next demo. We're going to create two stages, staging and production, but before the application can deploy to production, it needs to be manually approved. I also add some integration testing in the staging stage. The way it works is similar to the previous demo. Here I have the serverless API, the same code I used for the first demo. To add a stage, you just create another stage, which I call production, and I add the stage here. Then I add the pre step, which is a manual approval step, and I call it deploy to production. So let's try to deploy this. Mm-hmm, oops, not this one. Right, it's going to deploy the CDK application. So meanwhile, while we're waiting, let's open the Q&A. Are there any questions? Yes?
OK, so the question is: it's developer friendly, but at what level does a developer need to understand AWS Cloud infrastructure in order to actually use this? Right. So, sorry, what's the question? OK, obviously I'm already sold on what I see, so at what level does the developer need to understand AWS, in terms of what I'm showing? That's a really good question. So yes, it's developer friendly. But I do think that at least when they start creating the CDK application, they should already know the services involved and the minimum properties needed to deploy or architect the application. When I jumped from the console to AWS CDK, these were my learning stages: I learned what the properties are in the AWS console, then I read what the equivalent properties are in the CDK documentation, and then I just applied them in AWS CDK. So my answer is that developers need to know, at a minimum, the properties and configuration of the architecture they want to deploy when they are developing with AWS CDK. Cool. Yes? Hey, this one is going to be hard, so thanks. CDK is really nice, but what's the scenario when you're working in a large organization and there is a lot of legacy stuff? Yeah, so how can you deal with that? We can import existing infrastructure into CDK, correct? So before this session, I actually listed all the possible questions, and this question was not on my list. But I really appreciate it. So, may I ask you, have you implemented any kind of IaC before? OK, so are you using Terraform? Yes. Cool. So the good news is, if you're using Terraform, you can also use CDK. Why? Because we have an extension called CDKTF, the CDK for Terraform. Basically, it lets you define all the things you need in Terraform, and it synthesizes the configuration for you. But then again, it really depends on what kind of IaC you have implemented before.
So without any existing configuration like that, in this case using Terraform, you would need to build the CDK application from scratch manually. I hope that helps. Oh, yeah. No worries. Yes? It looks like Python. The screen is just regular Python. Sorry? When you demonstrated the CDK script, it looked like Python. Is it regular Python, or some subset of Python? Oh, right. That's a really good question. This is regular Python. What I did was just import the libraries available from AWS CDK. So if I go here, these are all the libraries that I need. There are only two libraries, and I'm using CDK version 2. And if you are using CDK version 1, I really recommend you migrate to CDK version 2, because it just makes a lot of things easier. You only need to import two requirements, and then you can use all the libraries within Python, or even other languages. Did that answer the main question? So if it's regular Python, you can have infinite loops. You have a stack, you have to have all this stack, I'll grant you that. Because with a regular structure, we did quite a lot of work moving from code to declarative languages for these tasks. And here, I see it coming back to code, which makes me question how stable the constructs are. So the question is whether with AWS CDK you can use things like if statements and loops and all that control flow. Yeah, so basically CDK is just a library, accompanied by a CLI that you use to run the application. The way it works is no different from how you usually code your applications. You can use the library; it also has, I'm using Tabnine, so it has good auto-completion. And you can do whatever you usually do in your applications. So you still need to debug? I mean, you need to debug infrastructure. Yes, you can also debug it. You can also do unit testing for your infrastructure.
That's good, okay. This is basically templating in a conceptual way. Yup. But this is not a manifest. Sorry? This is not a manifest. It's not a template. It's not a template; it's like you're coding the infrastructure, but it leads you to a lot of issues. That's my main concern. I see, I see. So what's your concern? What is the issue, the main issue? So you have instability. You have loops, which means you can have infinite loops. You can have access to environment variables or even web pages, whatever. So you gain instability. When you use a regular language instead of a declarative one, you introduce all this instability, and that downgrades the benefit, from my point of view. That's why I'm trying to understand how you deal with all the instability you introduce by using a Turing-complete programming language instead of a declarative language. Right. Why do you need to, or why do you mention instability? That's something I don't get. Because you can use a Turing-complete language, you can. Right. And if you can, sooner or later, you will. Oh, I see: because it's much more flexible using AWS CDK compared to using a declarative language like YAML or TOML. Right. So I have never run into that before. I think there are pros and cons to using AWS CDK. The advantage is that you can be more flexible in defining the architecture. For example, when you want to define Fargate containers behind an ALB, that's something you need to do from scratch if you're using a declarative language. But with AWS CDK, you can pack it as a library, share it with other teams, and they can use that stack right away. So obviously there are pros and cons. I do understand where you're getting at, but in my case, in my experience, it gives more flexibility, and the kind of instability you mention, I usually handle with unit testing.
So usually I write a unit test. For example, if I want this AWS Lambda function's timeout to be exactly 30 seconds and I wrongly define it as 60 seconds, the test fails and it's not going to deploy. So unit tests can act as guardrails whenever you use AWS CDK. But I'm happy to discuss this more with you afterwards. Awesome. Yes? So how do you manage the versions? For example, is there an issue where different modules end up on different versions? How do you think about keeping the versions in sync? All right, that's a really good question, and it's something we addressed in version 2. I'm guessing you're still using version 1? Or using Terraform, okay. So previously, with AWS CDK version 1, you had to import the service modules one by one in requirements.txt or package.json, each with its own version. With version 2, all you need to do is declare the single dependency here, and then you can use all the libraries directly in your CDK application. That's something I love about CDK version 2: it takes care of the versioning, so you don't need to worry about it anymore. All right. Okay, are we running out of time? Before we wrap up, I think the CDK application has deployed, so I just want to show it to you here. This is the API pipeline, right? It's still in progress, and after this it's going to create the CodePipeline. Now, something I forgot to mention: CDK Pipelines has a feature called self-mutation. In the past, if you used CDK to define your CI/CD, you had one stack to define the pipeline and another to define the application, and any change to the pipeline meant redeploying it yourself. With CDK Pipelines, whatever you change in your architecture, the pipeline mutates itself to match.
So this is something I find really powerful, and I haven't seen it anywhere else. You can see it there in the UpdatePipeline stage. Once that's finished, it's going to create another stage called production, with a manual approval before anything goes to production, without me having to run cdk deploy again. So imagine having that kind of flexibility and agility to define your architecture: you just push to your Git repository, and it updates itself. So yeah. Right, I see there's another question. Yes? Yes, sir. So you can write infrastructure as code in different languages. Yes. But imagine you have the code in some language, and the person who wrote it has left, is gone. And you want to move that code to the language of your choice. Is it possible, or do you need to rewrite it from scratch? Okay, I've tried that before. The way it works is that with CDK, you can export the app into CloudFormation, right? And then, when you want to continue the work, you can import that CloudFormation template into your new CDK application. So it's not translating the code line by line; it's a matter of importing the CloudFormation into your new CDK application. Good. Right, yes? So if we can do so many things in CDK, why not use AWS Amplify instead? Are there any advantages over Amplify? Okay, that is a really great question: why use AWS CDK rather than AWS Amplify? AWS Amplify is another open-source framework and set of tools from AWS, for those who want to abstract away the complexity behind a single CLI tool. With AWS Amplify you can build web applications, mobile applications, backend APIs, and integrations with machine learning as well. It's one really powerful tool. But there are cases where Amplify doesn't cover all the AWS services.
And for that, Amplify has an add-on for AWS CDK: you can plug your CDK application into the Amplify project folder, and it's going to deploy that architecture for you and wire it back into your Amplify application. So the answer is that not all services are covered by AWS Amplify, but you get the flexibility back through the AWS CDK add-on. Cool. Yes? Sorry, could you repeat that? So with Terraform, we have terraform plan. Is there something similar? Okay, with CDK you can run cdk diff, and it shows you the difference between the current state and the previous state. Oh, hopefully it's running. Oh no, not yet. So this is the diff: these are all the things that I changed that are not yet in the deployed state. It's not an exact equivalent of terraform plan, but at least it gives you insight into what you changed and how it's going to affect the resources in your AWS account. Right, so, cool. Okay, lastly, I just want to let you know again that all the content and materials for this session are available here on GitHub, under AWS Community ASEAN. You can get the slides on Speaker Deck and the demo on GitHub; the demo is ready to deploy. And if you have any other requests, just create an issue and I'll create the content for you. All right, thank you so much everyone, and have a good time.