session. It made sense. I was worried about presenting two sessions, but it actually works out: I get it all done in one go rather than worrying about another session tomorrow. So thanks again for coming to this one. In the earlier session we talked about the CI side of all of this, and now we are going to look at the infrastructure side of things with Terraform. That's still me, but I thought I shouldn't use the same photograph, so I used this one from Bristol, from earlier this week. We went on a very short trip; somebody recommended this bridge, so we took an Uber there, we were looking for something to eat as well, it was all a bit confusing, and the driver waited about ten minutes while we quickly ran across the bridge and came back, and then we went for lunch. Anyway, it was fun. So yes, same person: I'm a Director of Drupal Practice at Axelerant and I've been contributing to the Drupal ecosystem for several years now.
Let's talk about this session. What's the challenge of building infrastructure? First, it should be effective. We want to stay current with infrastructure technology, and it moves fast: AWS, for example, releases new products at a pace that works out to something like a hundred a year, practically one every few days. I'm not saying we have to use all of them, but we have to keep up with the trends to make sure we're using the right tool for the job. The same applies on the tech stack side, with the solution you're actually building. JavaScript frameworks move at an even faster pace, if that's possible; there's the common joke about a new framework every week, probably less true now that things are stabilizing, but things do move fast in the JavaScript world. Best practices keep changing, and you have to stay with them, not for hype, but so that your team can keep working and growing with the codebase you've written.

The second challenge is content, and this is more specific to the project. Beyond the technology, you should be able to model content the way you need it, with all of Drupal's rich content modelling capabilities: Paragraphs, Media, and so on. We want all of these things, but we also want to keep cost in the picture. It's a constant balancing game, getting all of these to work together towards your goal.

So here's the problem we faced, and the solution: building infrastructure for a multi-site, decoupled Drupal installation using Terraform. Let's break that up.

Drupal: I think you'd agree it's a pretty good system for managing content. It has rich content structure; you can define as many content types as you like; you have Paragraphs and other mechanisms for structured content and layout (with Layout Builder you don't even need Paragraphs for layout anymore); and content moderation is pretty powerful, which I think is one of the reasons Drupal continues to be relevant in the enterprise space. This project was for a major corporate in the hospitality industry; I can't name names, NDAs and all, but it's a very major player, and of course they needed support for rich, structured content and media handling.

Decoupled: this ties back to what I said about being effective. Drupal is great for content, and it's catching up, let's say, on the front-end side of things. I mentioned this in my last presentation: we built five or six different websites off a single Drupal back-end, so decoupled was the obvious choice for us. The content across the websites was similar, which is why a single back-end served all of them, but the front-ends were entirely different, so decoupled made sense. We didn't have to wrangle multiple teams together; the teams responsible for the designs had their own workflow and their own schedule, and I think that's one of the strengths of a decoupled approach to building Drupal websites. In this case we used Angular with TypeScript, and Drupal as an API-first service, which is of course what Drupal is good at; back in the Drupal 8 days we often said Drupal is a first-class CMS built on a first-class API server. It allows teams to work independently. Someone asked earlier how you test this: we started by defining an API contract, which we hoped to use in tests. We didn't get around to that, but that was the idea.

Multi-site: this is more about the cost part of what we discussed. It reduces total cost because even though you have six websites and six different front-ends, you still have only one back-end, and multi-site is a decent way to approach that. Acquia Site Factory (ACSF) allows spinning up new sites on demand; you don't need developer involvement to add a new site to your multi-site installation, ACSF takes care of that for you.

And finally, infrastructure, because the website has to live somewhere, and Terraform, because creating infrastructure with a mouse and keyboard is very 2010. Infrastructure as code is a very common paradigm now. Terraform is not the only tool that does it, but I think it's one of the tools that does it well. If you're purely on AWS, CloudFormation is a good candidate, but as I said in the last session, we weren't just on AWS; we were also on Aliyun (Alibaba Cloud), and there are other tools coming up too. I still think Terraform is one of the most widely used and most widely supported tools across providers.
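To make "infrastructure as code" concrete before going further, here is roughly what a small Terraform definition looks like. This is an illustrative sketch, not the project's actual configuration; all names are made up:

```hcl
# Illustrative sketch only: a provider and a single resource.
provider "aws" {
  region = "us-east-1"
}

# An S3 bucket declared as code. Terraform creates it if it is missing
# and leaves it untouched on later runs.
resource "aws_s3_bucket" "frontend" {
  bucket = "example-site1-dev-frontend"
  acl    = "private"

  tags = {
    Project   = "example"
    Terraform = "true"
  }
}
```

Running terraform apply against this creates the bucket once; rerunning it with no changes does nothing, which is the idempotency discussed next.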
Yeah, idempotent: you might have heard of idempotency from Ansible. Basically it means your script defines what gets created; there's close to a one-to-one mapping (perfect would be too strong a word) between what the module declares and what the tool does, and if an action doesn't need to be run, it doesn't run. I'm forgetting the dictionary definition, but that's the gist.

When we talk about decoupled websites there are two primary areas: the back-end and the front-end. In our case it was more like this: one Drupal instance and multiple Angular front-ends (this was at the time we built it; later we added more sites). A few things characterise the setup. Drupal was built using a multi-site approach: it wasn't a single site where content was classified and each front-end picked up different content, it was rather six different Drupal installations, essentially, but on a single codebase, in other words a typical multi-site. And each Angular front-end was served off an S3 bucket or an Aliyun OSS bucket, so there's no server runtime as such; the HTML just sits in a bucket and goes out exactly as it is. Which means that when Angular talks to Drupal it needs to know the correct API endpoint, and there are multiple environments and multiple clouds: an Angular front-end on Aliyun in the dev environment for, say, site one should only talk to site one's dev environment on the Drupal multi-site. That's why we needed multiple build stages for each environment, which we talked about in the last session.

The way it all came together, the information flow of the entire front-end, looks something like this. This example is for AWS, but Aliyun is very similar; like I said, it offers comparable products (there's an API Gateway-like service on Aliyun too), and some mechanisms differ, but broadly it's the same. On AWS, whenever the visitor requests the website, say site.com, the request hits the CDN, and if it's not cached at the edge it goes to the S3 bucket where we have the Angular app and the static assets, the icons and so on. If it's already in the CDN cache, it's served back directly. Now the interesting part: the visitor has the HTML and starts rendering it, which of course pulls in scripts, CSS files, images and everything. All of those are simple: they follow the same route, because they're all static and all stored in the S3 bucket, so it's the same flow for loading all the static assets.

Then you get to the API, and the API is also served off the same domain, at least in the AWS case. The Angular app now requests content from Drupal, and our main concern is performance. With the CDN, the HTML files are cached at the edge, which is great, but if you're going to hit your Drupal server for content you're not really gaining any performance advantage: the HTML may load quickly, but then it waits for the data, and that becomes the bottleneck. So of course we wanted to bring that to the edge as well. Our architecture was that whenever the visitor requests an API, it again goes to the CDN, which, if it's not cached, sends it to API Gateway, which in turn invokes a Lambda function. You need the API Gateway in front because you can't invoke a Lambda function directly over HTTP (I don't think that's changed, by the way, but at the time you certainly couldn't). The Lambda function then directly invokes the Drupal back-end, and the idea is that whatever response Drupal sends gets cached at the CloudFront layer, the CDN layer, so now you have cached API output as well.

Now, API Gateway can simply mirror content, so you don't strictly need the Lambda function, but in this case we do, because Drupal generates content with its own internal URLs. Of course we could customise Drupal to rewrite the URLs before it sends the response, but then Drupal would have to understand all the different URLs and their mappings to the real-world URLs, and it's a decoupled system: if the site is site.com, Drupal might live at something like dev-site.acsitefactory.com (that's not the exact URL scheme, but something like that). So the Lambda function goes through the response, replaces all those internal URLs, and sends it back to the visitor; it passes through the CDN, which means it gets cached there.

The final part of the equation is image assets, as in user-generated content, not the static assets. Again we wanted those at the edge, and they follow the same path. Drupal stores all its images under its files directory, and on Acquia Site Factory that's sites/g/files rather than sites/default/files, so that prefix is how the CDN distinguishes an actual user-uploaded image from a static asset: anything under that prefix goes directly to the image assets on Drupal. I think that's about it from the visitor's point of view. The editorial staff, they would use the Drupal servers directly. Well, not quite directly, I mean, of course they
had their own layer of IP whitelisting and all that, which made this whole thing challenging for us, because Drupal was locked down by IP and you don't know which IPs the CDN is going to make requests from. It made things a little tricky, but we got around it; that's out of scope for today.

So this is what we needed to build, and we decided we had to follow infrastructure as code. Terraform was an obvious choice; as its tagline says, it enables you to safely and predictably create, change, and improve infrastructure. The idea was that we knew we weren't going to get it right on the first attempt. Requirements change constantly, even when you know what you're building, and like I mentioned earlier we were really building something like 24 or 28 different websites, so this entire configuration that you just saw had to be repeated 28 times, mostly on AWS but some on Aliyun as well. Infrastructure as code is the obvious answer; it's simply not practical to do all of this manually.

So, Terraform. Has anyone come across Terraform before? Most of you, right, so I don't need to explain what it is; it's a HashiCorp tool. Let's go through it anyway. You have infrastructure as code, which we already covered. You get an opportunity to plan your changes: it's not going to blindly change everything without giving you a chance to verify that only the intended changes go through. So basically a plan, and then it creates infrastructure as per that plan, and you can design an approval workflow around it. We didn't need all of this in our case; the architecture I showed you isn't too complex, we needed a handful of resources to bring it all together, and we managed to build it in a single module, one for AWS and one for Aliyun. But in cases where it does get complex, you have the opportunity, just like code reviews for your code before it gets merged, to review the plan itself: not the Terraform source, but the plan of what is actually going to happen.

This is what a Terraform module looks like. I shouldn't call it a script, because it's not a scripted approach: you declare things rather than telling it what to do. In this example I'm creating a resource of type aws_s3_bucket. We had one bucket for the files themselves and another bucket for logs, and for each we specify a few things: the bucket name, the ACL, which region we want it in, and of course encryption. Many providers also support this thing called tags: in AWS you can classify all your resources using tags, which helps with billing and with finding resources later, so it's a good idea to always use them. One of the tags I always set indicates that the resource was created with Terraform.

The format might look like JSON, but it isn't; it was inspired by JSON and modified so that it doesn't need to be as verbose. JSON gets very verbose; if you've seen composer.json or package.json files you know what I mean, and compared to those this is a lot more readable. It's saved as a .tf file, and its main purpose is to declare resources. One interesting thing: this format is called HCL, and it is compatible with JSON, so if you want to machine-generate this configuration, you don't need to
write it like this at all: you can actually generate JSON and pass that in, and Terraform can read it just the same, because the two are essentially interchangeable. I won't go too deep into it, but there is an equivalent JSON representation of this entire thing: a "resource" node, under that an "aws_s3_bucket" key, and so on.

Next comes the planning part. You've written your Terraform code, and now, at the planning stage of your infrastructure, you would run terraform plan, or terraform graph. You can see terraform graph's output in the background; it's not very clear on the slide, but it produces a graph like that, representing your entire infrastructure. This graph is actually from this very project, not just some graph I found, and you can see that even for a relatively simple setup the graph gets really difficult to read. The idea isn't to read the whole graph end to end and understand everything, that's not productive; you usually try to understand it in parts. It will even represent things like variables as nodes, which in my opinion makes it less useful, but anyway.

The output generated by terraform plan, on the other hand, is very useful. Many organizations that rely heavily on infrastructure as code will even commit the plan output to their repositories, so they can always verify that the infrastructure actually created somewhere matches the plan; the plan is supposed to represent not what a Terraform run initiates but what is actually going to be present in the infrastructure. Whenever you update your Terraform files, the plan will tell you exactly which things need to change. At code-review level, for example, you may not realise that the change you're making is actually going to delete a particular bucket and create it again, or delete an EC2 instance and create it again. There are cases where Terraform has to do that, because certain attributes cannot be changed in place: in the earlier example, you might think you're just changing the bucket name, but S3, if I'm not wrong, does not let you rename buckets. There are things AWS simply doesn't let you change, and the only way Terraform can proceed is by destroying the resource and creating it again, and you might not realise that by reviewing the code alone. Which is why reviewing the plan output and verifying that it's actually going to do what you want is important: the plan will say it's going to delete this bucket and recreate it, and if that's a problem for you, you have the opportunity to stop it.

You also have an option to execute that exact same plan later: even if the Terraform source changes, you have the plan output saved (like I said, it can even be in version control), and you can run that same plan, not the current modules. In our case we just ran the modules; our setup wasn't very complex, it was straightforward, so we didn't really have this stage of reviewing the plan.

Fine. Now, once you have the module set up, or you've made a change to it, you run terraform apply to actually start making the changes, and you get output like what you see in the background. The first time you run terraform apply, of course, it creates everything, but if you make a change to a module and rerun it, it's only going to change what has actually changed. Terraform is pretty flexible: it supports variables, you can use other dependent modules, and if anything changes in the graph, Terraform knows exactly what to update; like I said, it's idempotent, so it doesn't need to go through the whole thing again. And then of course you test that your infrastructure works. As I mentioned in my previous session, Terraform was part of our CI run: during CI it would pick up the relevant Terraform module, run terraform apply, and then verify that the deployment works on the updated infrastructure.

Let's get back to resources. Resources are the basic building block of Terraform, and each represents some real-world resource, like the S3 bucket in this example. Other very common resources are EC2 instances, IAM roles and permissions, or a CloudFront distribution. Each resource has a single type that defines its behaviour; in this case the type says we're creating a bucket. Resources may depend on other resources, and Terraform builds a whole dependency chain. There are no dependencies in this small example, but you might want that before a CloudFront distribution is created, its bucket exists first; from the earlier graph we saw that the CloudFront distribution gets all its content from the bucket, so you need the bucket first and then the distribution. Terraform will of course do those in order, and whatever can be done in parallel, Terraform does in parallel.

The next concept we need to know is providers. It's not just AWS: in our case we had Aliyun, and generally you have GCP, Azure, DigitalOcean, Linode, and so many more. These are all implemented as providers, and your module needs to declare which provider it's using. You can actually have a single module that works with multiple providers, but as a best practice I would keep just one provider per module. For AWS, for example, the provider needs things like the access key and secret key, which region you want to work in, and a version constraint for the provider itself, something like ">= 1.54" (I think it's on 2.x by now).

Then there's the backend block. Like I said, Terraform tracks the infrastructure it has created, so that the next time you run it, it knows what already exists and doesn't try to recreate it. It has to store that somewhere, and by default it stores it on the machine you run it from, as a terraform.tfstate file right there. Of course that doesn't work in a team environment; in CI, for example, you can't have the tfstate sitting on some agent. One of the best practices is that the backend should be a shared location, like an S3 bucket. In this module I've specified that the backend is s3; I'm not showing it here, but you also define which bucket and which path the tfstate is stored at. In my case, like I said, we had around 24
different websites, which meant around 24 different states, so we had this whole system where, for each cloud, each site, and each environment, we had a different tfstate file. If you don't specify the s3 backend, the state is stored as a tfstate file on your machine, and yes, if you're interested in keeping the infrastructure current you need to keep it, but it's not a good idea to commit it to version control, because the tfstate file stores secrets.

Each provider, like I said, offers its own set of resources: AWS has its own, aws_s3_bucket and all that, and Aliyun has its own set. So it's not the case that you create one module and it works for all providers; maybe one day, I don't know, but right now, for each provider, whatever infrastructure you're planning to build, you need to write a separate module, and they each have their own configuration.

We discussed state: state is where Terraform stores information about your infrastructure, and it's how it finds out that something has changed. It's local by default but can be stored remotely, like in this example. Now, when you're running it in CI, things like the bucket name and the table name can of course differ per target, so you can put those settings in a separate file, for example backend.tfvars, and hand it to Terraform when initializing, making all of those available as variables.

Which brings us to variables. They are parameters: you can define as many variables as you'd like, with or without defaults. Take name_prefix, for example: S3 bucket names have to be globally unique, you can't have two buckets anywhere with the exact same name, so we added a prefix to all our bucket names so that there's no chance of collision. Or the base protocol, where the default is http. You can override all of these variables using the .tfvars files we saw earlier, or from the command line when you're invoking terraform apply. The file is typically named variables.tf, but the name doesn't actually matter: Terraform looks at how all these files are stored in the directory and reads all of them.

One of the things I did here was create a wrapper script that sets all of these variables, and that's common practice, it's not unheard of: you wouldn't directly invoke terraform apply, because the command gets really long, terraform apply with this variable file and that variable file and additional variables on top. We just had a wrapper script which our CI would run, and it took care of setting the correct variables. There are multiple ways to pass them, like I said: environment variables on the CLI, or a variable file, where you put all the variables in one file and pass it using a command-line option.

And when I say module, there's nothing special about it: it's just a bunch of .tf files in the same directory. When you run terraform apply in that directory, Terraform reads all the .tf files, loads them, builds a plan from whatever resources it can find, resolves all the variables, and can also load things like backend.tfvars. There's no real convention, at least I have not found one, except that each of these modules typically has a main.tf file; that's the only convention I could find.
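As a sketch of how variables and resources fit together, here's roughly what this looks like; the names are illustrative, not the project's actual module:

```hcl
# variables.tf (illustrative): parameters, with and without defaults.
variable "name_prefix" {
  description = "Prefix for all bucket names, to keep them globally unique"
}

variable "base_protocol" {
  default = "http"
}

# A resource consuming a variable via string interpolation.
resource "aws_s3_bucket" "logs" {
  bucket = "${var.name_prefix}-site1-dev-logs"
  acl    = "log-delivery-write"

  tags = {
    Terraform = "true"
  }
}
```

A terraform.tfvars file (or a file passed with -var-file on the command line) would then supply name_prefix = "myproject"; alternatively, Terraform picks up environment variables named TF_VAR_name_prefix and so on.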
Beyond that you're completely free. In my case I split the Terraform files based on the resource, so there was an api-gateway.tf, and all the resources related to API Gateway live there. The example I gave with the S3 bucket was rather simple, but for certain resources the configuration gets really complicated: API Gateway is not just the gateway itself, it has endpoints, then methods for each endpoint, then proxies and all that, and all of those are implemented as separate resources, all in that one file. Similarly there are different files for CloudFront and the rest. One more piece is output: whenever you run terraform apply, it creates a lot of resources, and you can read attributes of all of them. Say you create an EC2 instance and you need to know the hostname that got generated; you can read that attribute later, but to make it simple you can use this directive called output, which at the end of the run will simply print the values you asked for.
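An output block along those lines might look like this; again these names are illustrative, assuming resources called "frontend" and "site" exist elsewhere in the module:

```hcl
# Illustrative: expose attributes of created resources at the end
# of a terraform apply run.
output "frontend_bucket_domain" {
  value = aws_s3_bucket.frontend.bucket_domain_name
}

output "cdn_domain" {
  value = aws_cloudfront_distribution.site.domain_name
}
```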
The variable names here are just examples; the point is the names can be anything, and the wrapper script sets them correctly depending on the task. As you saw, there can be a lot of variables, and one variable that isn't set correctly could have unpredictable consequences. Of course you have planned everything, but the wrapper script makes sure mistakes are kept to a minimum; like I said earlier, automate, because it's very easy for people to make these mistakes. It initializes the correct remote state backend: in our case we were targeting 24, 28, 36 different environments at various points, and we did not want to deploy one site to another site's bucket, which would have been embarrassing to say the least. So if you're deploying site one's dev environment, the script sets everything that has to be set for site1-dev: the bucket name, the CloudFront distribution name, and everything else. It also does a little bit of validation before passing things on to Terraform: our branches had very specific names, dev, dev1, stage, and production, and you couldn't use anything else, so whatever you can validate, validate before sending it on to Terraform.

We already saw this in the last session, but in case someone's new: we would run Terraform on every push. Whenever someone made a change to one of the front-end applications, Jenkins would, apart from other things like building the front-end application itself, also invoke Terraform. This was important initially, while we were still changing the architecture, discovering things that didn't work, and absorbing requirement changes, because otherwise we couldn't be sure those changes got rolled out to all
24 of those environments. Eventually, though, we could remove it; we didn't need to keep that step in. Like I said before, running it here requires that we use a remote state; you can't do this without that.

We had a lot of learnings. One of the things that defined this project for us was how hectic it was. The pipeline itself we built over about a month, but the entire project was around three or four months long, and it was really hectic. What helped us was not tools; as I said in the previous presentation as well, it's not tools that make DevOps, it's the people, it's the culture. One of the things I keep saying about decoupled Drupal implementations is that it is not the technology, it is the teams that define it. One of the things we needed to do was build an API definition, an API contract, that both teams agree to. This was important because the teams work at their own pace. That is the benefit of decoupled: the technology benefit is there, of course, but the real benefit is that the front-end team and the back-end team can each move at their own pace. They don't have to be locked into the same sprint; they don't even have to have overlapping sprints. You could have the front-end team working on a sprint of two weeks and the Drupal team working on a sprint of three weeks, depending on what works for the application. As long as the API contract is there and the communication is there between the teams, it works.

Another thing is that we knew we had to implement this infrastructure automation, but it was difficult to sell it to an organization like the one I mentioned, a large hospitality player that has been around for years. It's difficult to sell them on the whole
process, because again it's not about tools, it's about the culture. If the organization doesn't recognize that, if it is not the entire team, rather than one person or the team lead of one team, adopting a certain set of tools, it's very difficult to make efficient use of those tools. It doesn't help if only one site uses Terraform while the other sites are built by hand; it doesn't help anyone, because the hand-built sites will definitely lag behind the others.

Another learning is that not all providers are created equal. Terraform is an open source project and all the providers are open source, so it depends mainly on volunteers maintaining each particular implementation. Something as popular as AWS is of course updated immediately; for any new product, I think by the next day you would have provider support. But for something like Allium it was genuinely surprising that there was no Terraform support for certain things we wanted to do; even the Allium CLI did not have that support, which was really weird to me, and even the Allium API. In fact, we considered writing a script ourselves: if the Allium CLI does not support that particular feature, which is right there in the web console, let's make the API call directly. But even that did not work. These kinds of things do come up. Fortunately, if you're working mainly with something like AWS, you don't need to worry; the providers stay up to date and the API stays up to date.

And again, it's really about the communication. One of the ways you formalize communication between two different teams is an API contract. If you're building a decoupled website, I would strongly suggest getting to that first: build a contract and get the buy-in from both teams,
the front-end team and the back-end team, to fulfill that contract. Then, if anything changes, they know how to communicate, and that is the only way to have a successful decoupled project; it is not any of these tools.

So, here are a few resources; you're welcome to find more information about Terraform, HCL, and the rest from the slides. You can find me at the same handle pretty much everywhere; mainly I'm active on Twitter. That's it. Any questions?

I'm curious how you manage your secrets in Terraform, for example AWS keys and things like that. For example, HashiCorp has the Vault service that lets you encrypt your keys, but from my research I couldn't find anything equivalent for Terraform, or maybe I missed it. I'm curious how you manage those secrets, because it's very important to keep those things safe.

Yeah, so one of the problems with Terraform is that secrets get stored in state, and that's why I said you can commit your plan output but should not commit your state. The good thing is that the state is then the only file you need to worry about keeping safe. There are two parts to secret management. The first is the input of secrets, getting the secrets to Terraform, and several approaches are possible. One of the ways is variables: in the CI process, we injected those secrets, if you remember from the previous session, using Jenkins, where we store secrets securely. We injected them from Jenkins into the CI run where Terraform executed, and Terraform took it from there; like I said, it then stores them in state. If you don't want to use Jenkins, or you're using something else, you can definitely use Vault. But even if you use Vault for input, you still have to worry about the secrets that get stored in the state file, which means that wherever you're storing your state has to be secure. Locally,
of course, it is secure, but it's not safe; safe as in, you could lose your laptop. So you have to find a place where you can store it securely. We went with an S3 bucket that was completely locked down with AWS permissions and everything, and even that is still not the best way to secure a state file. I haven't really considered the alternatives, but it would be great if you could just keep the state file in Vault, for example, and have Terraform query Vault for it every time; I didn't explore that. I consider Terraform to be great, but when it comes to secret management I find it a bit lacking.

Right; with Vault, which does the job for you, you can just safely store secrets, it injects them for you, and you can use them without any issues. But this is a bit different, as you can't really do that, so you need to think in advance about how you're going to deal with your secrets. That is very important, especially with AWS, because Terraform will need a lot of permissions to be able to build that infrastructure for itself.

So the thing is, this was at the time when I was building this, and they might have made improvements by now, but I know they were looking for better ways of handling secrets in the state file, specifically not storing secrets when it doesn't need to. For example, the AWS access key doesn't really need to be stored there, but at that point in time it was; I don't know, maybe by now they've updated it so it isn't. Some secrets it still needs to store, which means your state file is still sensitive, but something like the AWS access key it will not be storing in the state file anyway at this point.

So yeah, thank you. Any other questions? All right. Like I said, if there's anything you want to talk about, you can find me the same way, or I'll be around for the rest of the day and probably tomorrow. Thank you, everyone.
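To make the wrapper script described earlier concrete, here is a minimal sketch. The site and branch names, and the site-environment bucket naming scheme, are illustrative assumptions, not details from the actual project:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Only these branch names map to deployable environments; anything
# else is rejected before Terraform ever runs.
validate_branch() {
  case "$1" in
    dev|dev1|stage|production) return 0 ;;
    *) echo "unknown branch: $1" >&2; return 1 ;;
  esac
}

# The remote-state bucket is derived from site + environment,
# e.g. site1 + dev -> site1-dev (the naming scheme is an assumption).
state_bucket() { printf '%s-%s\n' "$1" "$2"; }

# The deploy step: point Terraform at the right remote state backend,
# export the per-environment variables, then plan.
deploy() {
  local site="$1" branch="$2" bucket
  validate_branch "$branch"
  bucket="$(state_bucket "$site" "$branch")"
  export TF_VAR_site="$site" TF_VAR_environment="$branch"
  terraform init -reconfigure -backend-config="bucket=${bucket}"
  terraform plan -out="${bucket}.tfplan"
}

# Example: which state bucket would site1's dev environment use?
state_bucket site1 dev   # prints: site1-dev
```

Validating first means a typo in a branch name fails loudly instead of silently targeting the wrong backend.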
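The per-push step that Jenkins invoked can be sketched the same way. The flags are Terraform's standard non-interactive ones; the surrounding job layout and the TF_BIN override (a dry-run hook) are assumptions for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical per-push step: Jenkins would run this after building the
# front-end application. It is fully non-interactive, which is exactly
# why a remote state backend is required: the CI workspace is thrown
# away after every run, so local state would be lost.
ci_terraform_step() {
  local bucket="$1"
  "${TF_BIN:-terraform}" init -input=false -backend-config="bucket=${bucket}"
  "${TF_BIN:-terraform}" plan -input=false -out=tfplan
  "${TF_BIN:-terraform}" apply -input=false tfplan
}

# Dry run: substitute `echo` for terraform to print the commands
# instead of executing them.
TF_BIN=echo ci_terraform_step site1-dev
```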
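On the secrets question: one common input path, sketched here as an assumption rather than taken from the project, is to let the CI secret store populate TF_VAR_* environment variables, which Terraform reads as input variables. The variable names below are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Copy a CI-provided secret into the TF_VAR_* environment that
# Terraform reads; fail fast if the secret is missing rather than
# letting Terraform run without it.
# (The variable names here are illustrative, not from the project.)
inject_secret() {
  local name="$1" value="${2:?secret must come from the CI secret store}"
  export "TF_VAR_${name}=${value}"
}

inject_secret db_password "example-only"
echo "$TF_VAR_db_password"   # prints: example-only

# Note: the value still ends up in Terraform's state file, so the
# remote state bucket must stay private, encrypted, and locked down.
```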