Hi everyone. Before I start, let's make this session interactive — I don't want to just talk at you, so stop me anytime and I'm happy to take your questions; I'll also try to ask you some along the way. With that, let's get started. I'll be talking about AWS cloud automation with Terraform and Jenkins. Can I ask how many of you are using Terraform today? Oh, that's awesome — I think I have the right audience. How many of you are using Jenkins? Pretty awesome, okay. This session is about automating your AWS cloud platform end to end: with a single click you can provision hundreds of fully integrated resources, and at the same time give a self-service facility to your developers or end users. Before I jump into the actual content, a little about me. My name is Ashok, and I'm a tech engineer like you. These are a few of my passions — there are many more. I mainly focus on cloud engineering and DevOps practices. I'm also a FinOps Foundation community member, which is why you'll always hear me talking about cloud cost optimization. And I'm big on security, so my solutions and my talks always involve security. How many of you are from cybersecurity? Good — this one is for you. This solution helps developers self-serve, lets security teams control all changes to the infrastructure, and frees cloud engineering and platform teams to produce more modules instead of hand-provisioning infrastructure. So let's get started. The key takeaway from this session is why you might build a custom solution with open-source tools like Terraform and Jenkins when there are plenty of SaaS products out there — Terraform Cloud being one. I'll also be talking about the common security considerations for building your own solution.
How many of you are from FSI, the financial services industry? Good. Then you probably know the MAS TRM guidelines for the public cloud — you always need certain considerations in place when you adopt any product, and I'll be talking about those. Then: how you can enable self-service with your IaC. I'll use Terraform, but you could also use CloudFormation — today I'm covering Terraform only, though this works with any IaC tool. After that I'll walk through the tech stack — which components and AWS services you can use to build your own automation platform — and a reference architecture, production grade, designed for multi-AZ, scalability, and high availability. Then the branching strategy for multiple accounts, as Chetra mentioned. When you have a Cloud Center of Excellence or Control Tower, you're probably spinning up multiple accounts per project, business unit, or team. How quickly can you provision resources into them? That's always a day-one pain point, so we'll cover how to plan your branching strategy strategically so that multi-account provisioning becomes easy. Along with that, there are certain guardrails and best practices for using the solution. As I mentioned, this talk is for people involved in day-to-day operations. Maybe you're the cloud team at a startup, or at a big firm with segregation of duties, where you want to do only your own job but you're flooded with cloud infrastructure requests — a repeated task. Maybe you're engaged only for provisioning infrastructure, while security is managed by the security team, databases by the database team, and other components by other teams. How can you manage all of that effectively — especially if you're looking for a self-service model for your developers?
One common ask from developers is always: why do I need to wait for you to provision infrastructure? Can I provision it myself? Yes, you can give them that — but then the security team comes into the picture: no, I won't allow it, there could be risks and security concerns. With this solution you can bring all parties together and create a common pipeline for everybody in the organization. I've shown only three personas here, but you can bring more people onto the same platform and make each of them the accountable and responsible person for provisioning or modifying their own resources. As I mentioned, the security team always comes in after the cloud or DevOps team. So collaborate with them — it's like a relay: if both teams run together, you succeed. Onboard them, collaborate with them, then explain how you can automate. All their concerns can be codified: use IaC to codify the policies and compliance rules, then build the solution. These are the common questions from the management, security, and cloud engineering perspectives — every organization, especially big firms in the financial industry, will have them. Look at the first one: can anybody guess whose question that is? It can come from the cloud team — we want to maintain standards and policies across our cloud environments. The second one? Probably from your developers, product owners, or management — they always want to run the business as fast as possible: how can we get all this quickly? And the third one, as I mentioned, is always from our dear friends in security.
They always want to maintain security and compliance without exposing the organization to risk, compromising any environment, or introducing vulnerabilities. These are the common considerations for building your own solution — I took them from the MAS TRM guidelines: your own network and policies, authentication and authorization, data protection, and artifact management. These are the guidelines every firm in the financial industry needs to follow. Unless you go with a SaaS product flexible enough to plug into your own authentication — your Active Directory or ADFS — and to be controlled with your own policies, maybe with IAM, most SaaS products won't be compatible with your organization's needs. So how can you bring all this into a solution that fits? We're going to answer all these requirements with the custom solution we're talking about today. Anybody have any questions? Let me pause and see. Anybody? Okay. I won't be covering what Terraform or infrastructure as code is — I believe everybody here is aware. You're all on the cloud, right? How many of you are following the ABS guidelines — Association of Banks in Singapore? Okay. MAS TRM? Good. One of the strongly recommended methods for provisioning or updating your cloud infrastructure is via infrastructure as code — you shouldn't do it manually from the console. Of course, earlier I also did things manually in the console, but eventually I moved to automating everything. How many of you are still provisioning resources in the console? I think this talk will help you move from the manual approach to an automated one.
For infrastructure as code you have various tools in the market. AWS has its own, CloudFormation, but the most widely used is Terraform, so that's what we'll use in our solution stack. Say you have a cloud engineering team responsible for provisioning your infrastructure. Whether it's one person or several, they write code and push it to a version control system, then run commands from there — in many cases they're even running commands on their local laptops, since the code is right there. But how scalable is that? Once you have multiple people, you bring in version control so everyone can commit their changes, and then you apply those changes to the target environment. Simple — everybody knows infrastructure as code at that level. But what if that scales to multiple environments, as the previous speaker described? We have Control Tower, which lets you provision multiple accounts and organizations within minutes based on your needs. But how difficult is it to onboard a project into all those accounts? Maybe you have a security account, and your security team says all the KMS keys go there. Or your network team says: create all the subnets in the network account, then share them with the target account where you'll deploy your workloads. That's the core concept of landing zones and Control Tower. But whenever a new account comes along, how quickly can you provision your infrastructure? That matters — the business is always waiting for you to provision the infrastructure, and the developers are waiting to deploy their code. That's the whole point of what we'll cover in the demo.
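The landing-zone pattern described here — a provisioning role assumed in each account, with network resources created centrally and shared out — can be sketched in Terraform roughly like this. The account IDs, role name, and region below are illustrative assumptions, not details from the talk:

```hcl
# Provider aliases for two accounts in a Control Tower landing zone.
# Terraform assumes a dedicated provisioning role in each account.
provider "aws" {
  alias  = "network"
  region = "ap-southeast-1"
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformProvisioner" # hypothetical
  }
}

provider "aws" {
  alias  = "workload"
  region = "ap-southeast-1"
  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/TerraformProvisioner" # hypothetical
  }
}

# Subnets live in the network account and are shared to the workload
# account via AWS RAM, matching the "create centrally, share out" rule.
resource "aws_ram_resource_share" "subnets" {
  provider                  = aws.network
  name                      = "shared-subnets"
  allow_external_principals = false
}
```

The same alias mechanism extends to a security account for KMS keys: each team's account gets its own provider alias, and the module wires resources to the right one.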
If time allows, I'll cover that. With that, let me bring in the traditional way. Everybody knows where the cloud journey started: from the console, to one person running CLI commands against their IaC code, to automation with pipelines — and now everybody expects self-service portals. Say you're a single team responsible for cloud infrastructure provisioning and operations: you'll have a lot of tickets. Maybe you receive 10 requests a day, each for 10 services — that's 100 services to provision. If the team is small, how can you boost your productivity? You can't, and all the projects are waiting on you to kick-start their journeys. Meanwhile you're maintaining your security and guardrails, you're fully accountable for everything you provision, and you have full visibility. The problem with a self-service or self-help model is that you don't know what developers are provisioning — the moment you give them access, they can provision whatever they want, not necessarily only what the business needs. So how can you securely delegate access, ensure no vulnerabilities are introduced and the organization isn't exposed to risk, keep controls in place, delegate the responsibility, and still see what they're provisioning? That's the new approach everybody — the whole market — is looking for. We'll cover how you can let your developers provision infrastructure that way. Let me pause here for a minute: any questions or queries? Anybody? If not, let me continue.
With the new approach, the benefit is that your developers can provision infrastructure — but not with their own code. As the cloud engineering or operations team, you write and maintain the modules: you decide what needs to be there, what access a developer gets, and how the resources are provisioned. You also onboard the security team and the networking team — whatever their responsibilities are, you pump their security policies in alongside the infrastructure provisioning. That can be done just by collaborating with your internal teams, again via policies or policy agents. This is one approach — there are various models — but with it you enable automation with a self-service, self-help model and bring lots of parties in as stakeholders: developers, security, network, and whoever else shares responsibility for the cloud. As I mentioned, it always starts with the cloud team or cloud engineer responsible for provisioning and maintaining the cloud infrastructure. You design your modules — a module is simply the way you want infrastructure to be provisioned: parameterized, packaged as a stack with all the integrations mapped one-to-one, with guardrails around it and security policies embedded. You also maintain a baseline configuration: sensitive information such as which role you're using, which policies, and which controls. That should not be exposed to developers, so keep it in a separate repository accessible only to your orchestrator.
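As a rough sketch of what such a platform-owned module interface might look like — the variable names, approved instance types, and tag scheme are all hypothetical, not from the talk:

```hcl
# modules/web-server/variables.tf (illustrative)
# Developers may only set what the platform team chooses to expose.
variable "environment" {
  type = string
  validation {
    condition     = contains(["dev", "uat", "prod"], var.environment)
    error_message = "environment must be dev, uat, or prod."
  }
}

variable "instance_type" {
  type    = string
  default = "t3.micro"
  validation {
    condition     = can(regex("^t3\\.", var.instance_type))
    error_message = "Only approved t3 instance types are allowed."
  }
}

# modules/web-server/main.tf -- guardrails baked in, not parameterized.
# data.aws_ami.approved and the instance profile are defined elsewhere
# in the module, outside what developers can change.
resource "aws_instance" "this" {
  ami                  = data.aws_ami.approved.id               # hardened image only
  instance_type        = var.instance_type
  iam_instance_profile = aws_iam_instance_profile.this.name     # fixed IAM role
  tags = {
    Environment = var.environment
    ManagedBy   = "terraform"   # mandatory tags enforced by the module
  }
}
```

The design choice is that anything security-relevant (AMI, IAM role, tags) is a hard-coded decision inside the module, while only business inputs are variables.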
Then you bring in your security team, SREs, platform team — anyone who wants to automate their job along with the cloud team. Say I want to provision an EC2 instance: there are a lot of dependencies required. One of them is the network. If there's a dedicated network team responsible for creating it, you bring them in and see how they want the network to look — you can codify their requirements and the standards they maintain. Then you release the module as a combination of your infrastructure and its dependencies. The dependencies can be IAM roles, IAM policies, or agents like New Relic — you can codify the installation steps too. Once the modules are there, your developers can come and consume them. In the typical developer way, they go to the repository and check the README, or you have a portal describing how to use the module with a given code snippet. The developer takes that code, and you provision a repository for them to commit to. Their changes will mostly be in the tfvars file — the equivalent of the buildspec or appspec concept in development. They put all their requirements in tfvars, combine the module code with it, and commit to the project repository. On the pull request you can impose strong maker-and-checker policies — every company wants to ensure no single person does everything. If a junior developer is provisioning or modifying code, a senior approves the pull request, which ensures no accidental typos slip through.
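On the developer's side, the project repository described above might look like this — the module source URL, version tag, and variable values are hypothetical illustrations:

```hcl
# main.tf -- copied from the module's README / portal snippet.
# The developer pins a released module version; the code itself is the
# platform team's, not the developer's.
module "web" {
  source        = "git::https://git.example.com/modules/web-server.git?ref=v1.2.0" # hypothetical
  environment   = var.environment
  instance_type = var.instance_type
}

# dev.tfvars -- usually the only file the developer edits, playing the
# same role as a buildspec/appspec in application pipelines:
environment   = "dev"
instance_type = "t3.small"
```

The pull request then diffs almost entirely on tfvars values, which keeps review by the checker fast and focused.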
Once the pull request is approved, it merges into the branch representing the target environment — I'll talk about branching strategy later. That kicks off your automation pipeline. We'll use Jenkins as our orchestrator to run the end-to-end automation. The pipeline encodes the workflow — the way you want infrastructure to be provisioned. Typically, after checking out the code you run static code analysis, then security scanning to find out whether, say, 0.0.0.0/0 is open or your database is public — all of that can be caught by your guardrails and policy agents; I'll cover what can be used there. As a cloud team you never want to be surprised by a $10,000 bill at the end of the month, so you can add a step to estimate how much the developer's commit will cost per month — you can actually check that. Then you always have a manual approval stage to review and approve the changes, which ensures you're not risking your infrastructure. Everybody knows Terraform maintains the state of the infrastructure, and if you change certain things, some resources get replaced. This manual review-and-approval stage is your final go: until you approve, the infrastructure is not deployed. After that, it deploys. That's the typical workflow, though various other steps can be involved. This is the self-service model from the code-development view; I'll come to the portal way of doing it shortly.
Okay, so now you know the workflow — but not yet how to build it: which components, which platform. This tool is for cloud provisioning, and you can extend it to on-premises, VMware, or other clouds, but I'll be talking about AWS. These are the components I've been using in the cloud automation platform, and on top of them sits the orchestration engine: Terraform plus Jenkins, with tfsec and Infracost. How many of you are aware of these tools? tfsec — security folks? Good. And Infracost? Good, thank you. These are the tools from my previous slide that complete the workflow, and on top of them you design your own workflow. It's fully customizable, and the tools are decoupled, so you don't need to worry about which tool gets replaced tomorrow. Today you use Jenkins; tomorrow you want GitLab CI/CD — you can do that. The point is, the foundation is your Terraform IaC code. tfsec is replaceable too: it's open source, and if tomorrow you want a commercial product instead, you can plug it into the solution. The workflow itself is the interesting part: as I mentioned, it always starts with pulling the code, running Terraform initialization, static code analysis, and plan; from the plan you want to know how much the cost impact is going to be; then a manual approval gate; and after that it deploys. That's the technical stack. Any questions? Yes, please.
At the moment, as open source we have only the one tool, though Terraform has its own scanning in the enterprise edition. Infracost has a paid version, and in fact an open-source version as well — it gives you a nice view; I'll show you the slide later. Thank you for the question. Okay, this is the high-level architecture for a production-grade system: your developers get access via ADFS or SSO within your organization, with all the security considerations from my earlier slide — encryption at rest with KMS, secrets managed in Secrets Manager, private repositories in CodeCommit, and SQS/SNS for the webhook. How many of you are using webhooks with Jenkins on AWS? Everybody knows there's no native webhook, but you can build one with Lambda and SNS, or another way with SQS and SNS. Either way automates things end to end: the moment the developer commits code, the pipeline kicks in. As for the flow: you can have one common tfvars file for your shared infrastructure data — which role connects to which environment, which VPC to use — while the project code holds the project-specific values. The project tfvars is accessible to developers, who can modify whatever they want, but those values can be overridden by the common one. Say a developer tries to change an IP range to 0.0.0.0/0 — how do you override that? With the common tfvars: pass multiple -var-file flags to your terraform plan or apply command, and the later file overrides the earlier one. Then the code is deployed to the target environment. Now, the interesting part: multiple environments and multiple accounts.
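The override mechanism here relies on Terraform's rule that when several -var-file flags are passed, files later on the command line take precedence. A minimal sketch — the file names, variable name, and values are illustrative:

```hcl
# project/dev.tfvars -- developer-owned, freely editable
allowed_cidr = "0.0.0.0/0"    # developer (accidentally) opens to the world

# common.tfvars -- platform-owned, passed last by the orchestrator:
#   terraform plan -var-file=project/dev.tfvars -var-file=common.tfvars
# Because it comes later on the command line, this value wins:
allowed_cidr = "10.0.0.0/8"   # platform guardrail overrides the developer
```

This is why the common tfvars lives in the baseline repository that only the orchestrator can read: the pipeline, not the developer, controls the final flag order.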
When I say environments, that can mean multiple accounts, or multiple environments within a single account. You can have a branching strategy like this: there's always a pull request that gets approved, and the static branches represent your actual environments. The moment code changes in dev, the dev environment is deployed — mapped one-to-one with your pipeline at the Jenkins level. I'm running out of time, so let me quickly show you: this is the pipeline, but it's not the end — you can integrate the orchestration with your ITSM tool, which typically passes API calls to Jenkins saying: my project name is such-and-such, I want to deploy to target account ABC or XYZ, and my tier is the web tier. A tier can be any set of services — you can think of them as approved modules: front end, API gateway, whatever — you combine and stack them up as a tier, select it, then pick which module version you want to deploy. That triggers the pipeline. This pipeline is for onboarding a project — the day-one, day-two job — not the workflow I showed earlier. Say you have a project to onboard with a hundred resources: if you asked a developer or your cloud team to codify everything, it would take weeks, but in the portal you just select it, your stacks are deployed, and the backend logic adapts to your needs. This part is interesting. The way it works: it always starts with the IaC repository for the project, then configuring a webhook on that repository, then downloading the baseline code you want to maintain commonly in every account, then cloning the repository that holds your stack modules.
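One common way to wire static branches to environments, sketched here under assumed bucket and key names, is a partial S3 backend that the pipeline completes per branch at init time:

```hcl
# backend.tf -- intentionally empty; Jenkins fills in the rest per branch
terraform {
  backend "s3" {}
}

# env/dev.backend.hcl -- selected when the dev branch triggers the pipeline:
#   terraform init -backend-config=env/dev.backend.hcl
bucket = "org-tfstate-dev"                       # hypothetical state bucket
key    = "aws-community-day/terraform.tfstate"   # hypothetical state key
region = "ap-southeast-1"
```

Each branch then gets its own isolated state file (and, via the assumed role, its own target account), so the same project code flows dev → uat → prod purely through merges.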
From your central modules, based on the tag selected, it pulls the code and adds it to the project repository, creates the branches, and onboards the project. If you look here, within a minute — maybe two or three — you can provision entire projects with hundreds of resources, pre-ready for multiple environments. And with that it kicks off the workflow, since your code has already been generated in the backend and added to the project repository. The project repository now has an approved change, which triggers the other pipeline — our Terraform workflow — and that runs everything. I purposely skipped one stage here: it can be an extra stage for your security team to see what you're provisioning as security components in AWS — your WAF, KMS, IAM roles, or other security services. All of those can be flagged here. If the plan contains security-sensitive resources — I've mentioned only three here, but the list can get very long — you can send a notification to your colleagues on the security team to approve. That's how you do it in the pipeline. Once that approval completes, it goes to cost estimation; when cost estimation succeeds, you reach one final gate, the manual approval; and then it deploys all the way and writes the Terraform output. Now the infrastructure is provisioned, and your developers need certain artifacts — say, the Lambda ARN. Rather than making them dig through the console, you can send the artifacts of whatever you provisioned directly to the developers' mailboxes.
The last step does exactly that: whatever infrastructure was provisioned, all the data is sent to the developer's mailbox, and they can use it to deploy from their CI/CD pipeline, or maybe the SAM or Serverless framework. This is the tfsec scanning tool, which scans the code and always flags any vulnerabilities in it. The screenshot here shows an S3 bucket missing bucket encryption — which is mandatory to have. I left it out on purpose, which is why it was flagged, and the pipeline itself halted: until you fix the issue, you can't go any further. It's a hard gate for your pipeline — even if your developer takes code from GitHub and adds it to the repository, this will be flagged. There are best practices too — I'll share the slides, guys, since I'm running out of time, but I really wanted to show the demo; I love giving demos. I'll take a minute for it: I'll share the slide covering the guardrails and best practices, then bring up my Jenkins portal and show you live how quickly you can provision the infrastructure. Any questions before I jump into the demo? Maybe one or two? Anybody? Yes, please. The question: where do you keep the common secrets for Jenkins and the other tools? Secrets Manager. That's why, if you look at my solution stack, one of the services we use is Secrets Manager. For example, as you mentioned, I need to connect to Infracost, which is outside the solution, and I'm firing a lot of API calls at Jenkins — Jenkins has a username and token. Where do I store those? In Secrets Manager. Jenkins has a lot of plugins — that's one of its strengths — and there's a Secrets Manager plugin available.
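For reference, the kind of change that clears a missing-encryption finding like the one in the screenshot is adding default server-side encryption to the bucket. Resource names are illustrative, and this assumes the AWS provider v4+ split-resource style:

```hcl
# The bucket tfsec flagged (no default encryption configured):
resource "aws_s3_bucket" "artifacts" {
  bucket = "example-pipeline-artifacts"   # hypothetical bucket name
}

# Adding default SSE satisfies the encryption check, so the hard gate
# in the pipeline opens and the plan can proceed:
resource "aws_s3_bucket_server_side_encryption_configuration" "artifacts" {
  bucket = aws_s3_bucket.artifacts.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "aws:kms"
    }
  }
}
```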
You can fetch secrets on the fly and mask them in your build — you won't be able to see the secrets in the logs. I'll show that if I get time, two more minutes. Can you see this, guys? How many are interested in the demo, by the way? Based on that I'll go ahead — otherwise my fellow speakers are waiting. Okay, plenty of hands — awesome, let me do it. This portal is not the final one — I'm still working on the front-end development, so it's not fully built. I'm showing the backend, the Jenkins portal, which is equivalent to your self-help portal. Look at this: it always starts with day one, onboarding a project. What does a project need? Users to be onboarded, repositories to be created, pipelines to be created, some baseline code added to those repositories — all of that can be automated in this flow. So let's go with it. Can anyone suggest an interesting project name? Sorry? Okay, I'll use AWS Community Day — easy, and relevant to the talk I'm in; I could use any project, in fact. So I take that as the project name, and I have multiple accounts in my environment — I'm also using Control Tower to spin up and delete accounts — so I can choose whichever account I want to deploy my resources into. Based on that selection, the baseline configuration kicks in — as I mentioned, there's a baseline configuration for every account: which role to use, the VPC ID, the artifacts of the infrastructure. All of those are copied into your project repository automatically based on your selection. So I'll select SpaceX — yeah, I think so.
Then let's take a front-end deployment. I've simplified this to a three-tier architecture, but it could be thousands of stacks — modules — and as you select, the options pump in; the list never ends, so I've cut it down. I'll select only the front end. This is where, as I mentioned, you can send all the artifacts to your developers or to the DL that's part of your project. In fact, you can send the project onboarding details too, saying: welcome to XYZ automation, your project is onboarded, and these are the CodeCommit repositories for hosting your infrastructure code. With that, let me click — there's a build running. Within about a minute it provisions all the requirements, and as an initial step it also kicks off the workflow. Look: in 11 seconds it has created the CodeCommit repository, created a webhook, and downloaded the baseline code for the environment I selected. Then it clones the project repository, creates the branches, adds the baseline code, and creates the Jenkins pipelines — all the integrations between the repositories and your pipelines are done automatically — and it invokes the pipelines once it's completed. So within 52 seconds, your IaC details are sent to the developer: this is the project, this is the repository, with all the details ready for developer onboarding. The second part is your pipeline, the workflow. As I mentioned, this is the project we just launched. It's probably already been invoked, because as part of onboarding I configured it to trigger once the baseline configuration is added. It's doing all the checks I mentioned in my previous slides — code scanning — which is good.
Then it shows the plan: 20 resources being created, though it could be more — I limited the code. Those 20 resources make up the front end. By the way, I removed CloudFront from the front end, since CloudFront takes 30 minutes to deploy; I've included API Gateway, Lambda, and a hello-world with some sample code — it can be anything. All the tags are there too — with the tags you're ensuring your end-of-month billing is efficient and accurate. And sorry, security folks — I purposely skipped the security stage, but there is a security resource in here: the IAM role required for Lambda to run, the basic execution role. I skipped that stage because I wanted to go directly to the manual approval. Even if a security team isn't part of your setup, you can still have one manual stage as a hard control to approve the infrastructure. The moment you have it, just approve it and you're done. That's the typical way to deploy: approvals ready, go ahead and deploy. It does all the infrastructure provisioning in minutes. Sorry for running over — I have a lot of content to share. Thanks, everyone. Thank you so much, Ashok, for the content — and please reach out to Ashok for any support.