Next up is Paul, Paul Stack, and he will talk to us about Pulumi and infrastructure testing. Apparently it's a real thing. Okay. Oh, nothing. There we go. Can everyone hear me at the back? No, no. Okay. I don't know what's going on. I have a very loud voice so I can shout. This is a 55 minute talk. I don't know why I got the short straw on a 55 minute talk when everyone else got 25, but I guess we'll give it a try. So this is a talk called infrastructure testing: it's a real thing. Hands up who does infrastructure tests. Less than 10% of the room. This is why this is actually a relevant talk. So thank you all for coming. I work for a company called Pulumi. You had the Terraform appetizer, now you get the Pulumi main course. But it's not really a sales talk for Pulumi at all. That happens at Config Management Camp. I'm just kidding. So, in the beginning: software developers have been including testing as part of their development cycles for a long, long, long time. And unfortunately, the ops people, the infrastructure management people, I'm going to call them the laggards. Okay. And I really am going to say the laggards, unfortunately. But we are in a part of the industry that hasn't had testing at its forefront. We used to have very specialized people who knew how to do very special things, and they were the center of everything that happened within operations. Now, what I mean by that is we had networking geniuses, we had DNS geniuses, we had people that understood the data centers, the racks, the power situations, the storage. We had all of these different things. And because everyone had their own speciality, there wasn't really a need to bring in an automated system for testing what they were doing. It was what they did, day in, day out, and that was their area of expertise. Now, at the same time, the developers actually went for manual QA testing. Anyone still work at a company where you have manual QA testing? Okay.
We need to talk over a beer and work that one out. So most of the industry has moved from manual QA testing to having testing included in their CI pipeline, their CD pipeline, included as part of their development workflow. We broke down the barriers between devs and QAs, and it became part of everything we did. Back in 2007, Martin Fowler, who's like a software development thought leader, actually wrote a post called "Mocks Aren't Stubs". So we were at the part of the industry where people were starting to mock out integrations in their software, between the code and the database, or the code and the file system, all these different things, to actually test that what they were building was fit for purpose. Again, at the same time, we didn't have the same evolution in ops. But in development, we got to the point where we understood what was called the pyramid of testing. So we had levels of testing within the pyramid, starting at the bottom: we had unit testing, then we had integration testing, then we had end-to-end testing. Now, what I mean by a unit test is testing code in isolation, not calling its dependencies. So if you have a function, and that function talks to the file system and the database or the cloud or anything like that, you mock out those interactions and you test the function in isolation, and you can mock the different interactions and check that the code adheres to what it's supposed to do under different conditions. Then we had integration tests. And, excuse me, I'll go back a second: unit tests were fast, very, very, very fast. You could run hundreds if not thousands of unit tests in a matter of seconds, because they had no external dependencies. Then we moved into the level of integration tests. And integration tests allow us to check that pieces of our system were working together. So we could actually call the database here.
We could actually check that the code was able to talk to the database and connect to the database. And these were much slower. And we started doing crazy things in our industry, like only running our integration tests on a cron per night, or twice per day, or different scenarios that kept the rest of the feedback loop a little faster. And then we realized that the integration tests were still not enough, so we had to do end-to-end testing. Now, what I mean by end-to-end testing is from the outside of the application the whole way through, whether it talks to the file system, the database or whatever, checking that the whole flow behaves as expected. Now, it's a pyramid for a reason, because the comprehensiveness of the tests goes up as we go up the pyramid, but the brittleness of the tests also goes up as we go up the pyramid. You can have tens of thousands of unit tests, and you can change a line of code and break tens of thousands of unit tests, and it's pretty simple to actually refactor and update them. If you have tens of thousands of, excuse me, end-to-end tests that talk to your system from the outside going in, the chances of you being able to fix them all very easily are a lot less. And then we start doing things like deleting them, or adding skips in our testing. Does the same hierarchy of testing exist in operations? Can anybody tell me, do they follow a similar style of testing within the infrastructure or operations style systems of their company? Two, three people. Those three people, I will buy you a beer. Like, actually, because it's quite cheap here, but not in the city. Now, we haven't gone through this same evolution of testing. CI/CD for developers was a huge thing way back in the early 2000s.
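To make the bottom of that pyramid concrete, here is a minimal sketch of a unit test in isolation: the function under test receives its dependency as a parameter, so a test can hand it an in-memory stub instead of a real database. All the names here (`UserStore`, `greetingFor`) are illustrative, not from any real codebase.

```typescript
// A minimal sketch of "unit testing in isolation": the function under test
// takes its dependency (a user store) as a parameter, so a test can pass a
// stub instead of a real database.

interface UserStore {
  findEmail(userId: string): string | undefined;
}

// Function under test: builds a greeting by talking to its dependency.
function greetingFor(store: UserStore, userId: string): string {
  const email = store.findEmail(userId);
  return email === undefined ? "Hello, guest" : `Hello, ${email}`;
}

// In a test, the database is replaced by an in-memory stub: no external
// dependency, so thousands of tests like this run in seconds.
const stub: UserStore = {
  findEmail: (id) => (id === "u1" ? "paul@example.com" : undefined),
};

console.log(greetingFor(stub, "u1")); // greeting for a known user
console.log(greetingFor(stub, "u2")); // fallback for an unknown user
```

The same pattern carries over to infrastructure code later in the talk: if the thing your code talks to can be handed in as a dependency, it can be stubbed.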
And if you ask an operator, and I'm going to class them as traditional operators, people who used to think that we used to feed them pizza under the stairs and stuff like that, before the evolution of DevOps, of course. If you ask those same people, did they hook their scripts for infrastructure up to CI/CD, they would probably tell you: we manage the CI/CD tool, we know our scripts shouldn't go in there. And then we've got to this point where we have the rise of infrastructure testing. Now, it's not a new thing. It's really not a new thing. You can see people are doing it today, and there are tools that have been around for a long time. Really a long time. Anyone use Test Kitchen for Puppet back in the day? Quite a few people. Anyone heard of Serverspec? Yeah. ChefSpec. These are all tools that have been around and actually been part of the ecosystem for a while, but they were very much picked up by a small number of people in the ecosystem, because they had the freedom to do it. I'm not going to say they were working on more greenfield products, but they were in a scenario where they could actually build this into their pipeline, into their scenario. And what it ended up doing is it gave us this as the normal testing pyramid for operations. Now, this is only acceptance tests, and I'm going to call them acceptance tests because it fits into the model I created before, so you can try and relate. So it's the middle segment of the last testing pyramid, but unfortunately this is the area where we were testing after resources had been created. We were actually testing that the tools that we used were doing the job that we asked them to do, which is a valid level of testing, it really is, because if you're going to outsource to the tool, you expect that tool to work, and that's okay. But the better testing pyramid for ops would also include a level of unit tests below.
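As a sketch of what that unit level can catch before anything exists in the cloud, here is a plain predicate over the declared inputs of a resource. The `BucketArgs` shape and the allowed-regions list are simplified assumptions for illustration, not a real provider type or a real company policy.

```typescript
// A before-the-fact check: validate the *declared* inputs of a resource
// before anything is created in the cloud. BucketArgs is a simplified,
// hypothetical stand-in for real provider inputs.

interface BucketArgs {
  region?: string; // undefined means "whatever the default region is"
  acl?: string;
}

const ALLOWED_REGIONS = ["eu-west-1", "eu-central-1"]; // example policy

function regionViolation(args: BucketArgs): string | undefined {
  if (args.region === undefined) {
    return "no region set: the bucket would land in the default region";
  }
  if (!ALLOWED_REGIONS.includes(args.region)) {
    return `region ${args.region} is outside the allowed EU regions`;
  }
  return undefined; // no violation
}

// A bucket declared without a region is flagged before anything deploys:
console.log(regionViolation({ acl: "private" }));
console.log(regionViolation({ region: "eu-west-1" })); // undefined: fine
```

A check like this runs in milliseconds with no credentials, which is exactly the point of the unit layer.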
Now, what I mean by a level of unit tests: in the previous scenario, let's pretend that you're deploying to the cloud, for example, you would have to have credentials on your machine, as a developer, or as an operator, or an infrastructure person, or a DevOps engineer, whatever you call yourself, that would allow you to talk to the resources in the cloud that you created them in. Okay, so it was after the fact. And because it was after the fact, problems can already have arisen. Who has heard of GDPR? I'm sure it's the bane of your life if you're in the infrastructure world. But because we wrote tests after the fact, if we wrote a test that basically says, hey, this resource that we created needs to be within a specific region in our cloud, and someone created it, and it's already been created in the default AWS region, which is us-east-1, we've already potentially broken GDPR, because we have stored information somewhere it's not allowed to be stored. So after the fact is only really part of it. So we needed to introduce this level of tests, which are unit tests. It's about understanding that the code will do what it's supposed to do, rather than has done what it was supposed to do. And the reason why these are extremely interesting: now, DevOps is brilliant, it brings the developers and the operators together, but what about the poor security people? Okay, and so we have the rise of DevSecOps, which is a huge part of the industry right now, and there's all sorts of things happening. But without embedding them in what we're trying to do, and only testing after the fact, we can do all sorts of crazy, dumb things in the cloud. And I really mean things that look sane to you, the operator, but to the security people are not. And not only that, they can cause your company big problems. This just makes me laugh in so many different ways, because there was actually an admin@kremlin.ru account spotted on thousands of Russian MongoDB servers. Okay, I'm not pointing any
fingers, but I just think it's hilarious. But it's more the fact that unsecured MongoDB databases were the source of information for this. Anyone use MongoDB? Yeah, you probably have this a lot. Kidding, I'm joking. So I pick on MongoDB developers a lot, and sometimes other types of things. Then another one: people who deployed an Elasticsearch cluster actually leaked the data of 6.7 million people in Ecuador. And this was, what, 2019? This was last year. And in the three or four months since that happened, we didn't learn our lesson, and it just came out two weeks ago that Microsoft actually exposed the customer support conversations of 250 million people in Office 365. And again, this was based on Elasticsearch. Now, I'm sorry, but if you're doing your unit testing and your integration testing, that's brilliant, but if you're not introducing some level of security testing, or security validation, or basic sanity validation, then you need to really start to think about how you can integrate that. In 2020 we are extremely, extremely bad at security in our industry. There is a very special sect of people who live it, they breathe it, and without them we would be in a much worse situation. Any security professionals in here? Two. That's the people who we owe beers to, right there, and that's the people who we need to help integrate into our existing pipeline and our existing flow. I believe the best testing pyramid should be this. Now, I'm not suggesting that you have more security tests than acceptance tests and unit tests, but the creation and the management of our infrastructure is actually only a small part of what we do on a day to day basis. Keeping our systems secure, keeping our systems running, is a bigger part of what we do, and if we can do something in our industry in order to keep that working, then great. So I'm going to show you some demos for these scenarios. The demos are using a tool called Pulumi. Quick disclaimer: Pulumi is not a paid-for product, this is not a product pitch
in any way, shape or form. You can do the same thing using tools like Terragrunt and so on and so forth. It's Apache 2 licensed open source, and it is not strictly a wrapper around Terraform that only allows you to write your code in TypeScript, and I will demo that, because I'm actually writing it in JavaScript as well. So I'm going to start very easily, and very easily will be: I have a Pulumi product... you need to be able to see this, I apologize. I have a Pulumi project. There we go, very good. And that will just allow me to create some infrastructure. Now, I am going to create EKS clusters, okay, Kubernetes clusters in Amazon, and therefore we know they're slow as anything, so I have pre-created them, otherwise it would be a very boring demo and you would actually have to watch the screen, just sit there and go next, next, next, next, next. So we're going to skip the creation of the clusters, but I'm going to do a few things. Okay, so I'm going to say fosdem-01, and it is called a bucket. So, by default, Pulumi is an infrastructure as code tool that allows you to keep your infrastructure code declarative, allows you to lay it out in a real language, but under the hood it is actually managing it in the same way that Terraform would manage it. This is using TypeScript, just to show you that's there. Now, everything is sub-packaged, so I can say aws.
and I can get all of the sub-packages. So let's just continue with S3. But the most important thing is that it's real code, so you can actually look at what's happening under the hood by stepping into the integrations, and I can have a look at the arguments that are required, because everything is there. Now, back to what it's doing. What this is doing is it's going to create an S3 bucket with the name fosdem-testing-bucket and set an ACL of public-read. Anyone have public-read buckets in their organization? Okay, let me rephrase that: does anybody have a real, supposed need for public-read buckets in their organization, and actually have them? Okay. So there are other people who just leak public buckets all over the place, and you can search, yeah, there's a load of stuff on the internet that will actually allow you to go and find public-read buckets on the internet. Then I'm going to say, I've already created these resources, because this is not a Pulumi demo, I'm going to say fosdem... oh, wrong one, fosdem-crap-cluster. I actually called it my crappy cluster, because it is a wrapper around a cluster that is not deployed inside a VPC, it doesn't have subnets, it's basically public-facing in every way, shape or form. Don't create an EKS cluster like this, please. Okay. And then, lastly, I actually created a different one called a better cluster, where we create a VPC and then we pass in the VPC ID and its public subnet IDs here. But it's just to demonstrate, I haven't created private subnets. This is based on something inside Pulumi called Pulumi Crosswalk, which is an opinionated API across the top of a VPC, it creates sensible defaults. And it sets a version, okay, because the one before doesn't have a version. Now, this is really important to talk about: there are a number of companies who do not allow you to use the latest and greatest version of either databases or any type of software that is out
there, because they have to go through compliance checks, they have to go through different pieces. So if you're just setting no version, then you could get yourself into a little bit of a problem anyway, because you're potentially going to annoy the security team. And lastly, I can actually save that, and I can go back to my code and I can say pulumi up, and let's just check I haven't broken any of my infrastructure. I have no internet... where's my internet connection going? One second. It's a super secret wifi password. If this does not work, the next demo is even better, because you can turn off the internet connection. Go away... there we go, that's better. So let's just run pulumi up, I'll take a little swig of beer... and it still doesn't work, it's still thinking about it, it's thinking about it... there we go, that's better. So it does the same type of thing under the hood as Terraform: it reconciles the state with what's actually going on, and effectively it tells me that I have no changes in my infrastructure. Okay, so this is not a Pulumi demo, if you come to Config Management Camp, come and have a look at how it really works, but this will tell us that nothing has changed, because I have existing infrastructure that is pre-created. I really hope so. Let's go back to the code while that's finishing its run, and we're going to add this idea of specs. Now, anyone ever heard of BDD, behavior driven development, where you sort of define in your code, in a user-agnostic language, what it's supposed to look like and what you're actually supposed to adhere to? So I'm going to create a new file and I'm going to call it index.ts, and I actually renamed this snippet recently, because it was something that could have been annoying for people. So we're going to call it mocha-testspec, and we are basically using Mocha, which is a JavaScript test runner, and it allows you to write specs in a very simple way using another tool called Chai. So that one's not interesting, but now we're
actually going to write a real spec, and I'm going to say bucket.spec.ts. Now, remember, this is testing after the fact: resources have already been created here, and we're now checking to make sure that they've been created as expected. So I'm going to say fosdem-01-bucket.spec, and I'm going to say import... that's better, now we're in the right place. So, fosdem-bucket.spec, perfect, that's much better. We're actually going to import my bucket, which we'll look at in a second, and we're going to wrap it in a condition that basically says, if it's a Pulumi dry run, then forget it, because there is no infrastructure that has been created here, so Pulumi would fail. We can start to write specs like: it should have an exact name, it should be in eu-west-1, it should have a private ACL. Okay, so these are making sure that the resources that we've actually created adhere to what we've actually asked it to create. And we can of course go back straight away to the code that we've looked at and created, and we see, well, the bucket name is correct, but the ACL is wrong, so we're going to fail at least one test, and then we can actually add the check about where it's been deployed. But the most interesting thing is at the end of my... excellent... does somebody have a question? I thought somebody shouted. At the end of my code, I can call the function runTests, and as part of my code run, you'll see very shortly, in the top right hand corner, that it starts to run Mocha at the top, in, like, 3, 2, 1... hopefully, now, please, please... there we go, you can see it here, it's just my screen is too much, but very shortly we'll get an output on the back of it. Now, of course, that's only one set of specs. Let's add in some more specs, because we still have this thing called my crappy cluster: eks.spec.ts. And we're going to say... bucket spec... we're going to delete this and we're going to say fosdem-01-eks-spec, and we can really start to
take advantage of it. Look, I can just import my crappy cluster, and lastly, the thing I have to do is import * as aws from "@pulumi/aws", just because of what it's actually doing. But we can start to write and say: look, I need my version of EKS to be 1.13, because that's what the security part of our organization adheres to. I need, and if you don't do this today, please do, I need my infrastructure in Amazon to run in its own VPC, not the default VPC that you get with your Amazon regions. And we can write these tests, and of course we can include these tests in our output. But if we go back to our last code, we'll actually see very shortly, I'm actually exporting a ton of stuff for this demo... come on... this is now where I go off the screen, because I have a kubeconfig, and you'll see what comes out the back of it. Now, this is testing before the fact... excuse me, after the fact. And after the fact is one of those things where it's kind of difficult to do. Of course you can fail your build, you can have a specific build which actually runs this again and again and again, to make sure nobody's changing any characteristics of the resources that are created within the cloud, but it doesn't really give you anything more. So we'll go and have a look at a better example. Any questions so far? Anybody care? Oh, a question. So the question is: I could do this in a staging environment, and then it's not after the fact. Yes. The trouble is that not everyone adheres to the fact that production is the same as staging. I wish we could guarantee that was the case, but in every company I have worked in, including companies where I have been involved in a team, that isn't the case. And that could partly be down to the fact that I'm also lazy as well, so it's just a human problem. But you are 100% correct: you could test, and you could verify, and then promote the build, and so on and so forth. So let's think about how we can do this in a different way. Now, it's very much a case of: we have an infrastructure, and again, just to show you that we
are not a wrapper around Terraform just for TypeScript, we actually allow JavaScript as well. We are in a situation where you can actually write your infrastructure as code in a number of different languages, and you can actually bring all of these languages together to write common tests, because if you can give people common tests, or a central place for common tests, then you can test your infrastructure wholesale. Now, the last one actually caused me to spin up EKS clusters, VPCs, which included public subnets, which included also private subnets, which had elastic IPs and NAT gateways, and it is immediately costing me money in my development cycle, and it takes approximately 20 to 25 minutes to deploy the EKS cluster. And that, for me, is not acceptable. That's not what I want to be sitting spending my time doing, although I can drink beer at the same time, but that's okay. But the fact that we can do that means we should be able to mock what is going on. Now, this is where we are working right now inside Pulumi: we believe that because you are writing in code, in an ecosystem that manages that code and runs that code, you should be able to write tests in the same way as other people within that ecosystem write their tests. Okay, so I'm going to write some... this is actually how I would declare my infrastructure, but I'm going to write some JavaScript tests in order to actually test my infrastructure. So if I run pulumi up on this, it will go and create an instance, and so on and so forth. Now, I want to prove to you that I don't have anything: grep for AWS access key, no AWS access keys in this part of my terminal, this profile, right now. Okay. So what we're going to do is we're going to create a new file, and we're going to call the file ec2tests.js. Now, the first thing that we actually have to do is we need to say let mocha = require("mocha"), let assert = require("assert"), and then let pulumi = require("@pulumi/pulumi"). So we're actually bringing in the modules like we would do in
any test framework. Now, going back to my code: the first thing I'm going to do is I'm going to create a security group, and the security group allows access to port 22 from everywhere, and it allows access to port 80 from everywhere. Hmm. Can anybody see a problem? Hmm, hmm. Okay. And then what we're doing is we're deploying the instance on the default security group, on EC2-Classic, which would again allow it to be accessible from the world. Hmm, do we see any further problems here? So, immediately, what looks like some basic infrastructure code can cause us a number of issues right here, because if this is, like, a server that people can access inside your network, then this is a big problem. It's a big problem, and if you're doing this right now, please try and not do this anymore. Okay. So then we're going to say fosdem-02-mocks. We're going to mock the cloud, okay, and it is literally mocking the cloud. So we're going to say that whenever anything from Pulumi wants to create a new resource, or update the resource, or read the resource, of type ec2 security group, then we're going to return this ID, echo any inputs that you require, and we're actually going to set the ARN of the resource. And we can do the same for the instance, and you can basically do this the whole way down, right, you can mock any resource, it doesn't matter, it's not about what this is actually doing. And then we're going to say let infra = require("./index.js"), and then lastly we're going to say fosdem-02-specs. Let's just make sure I haven't... oh no, I added it twice, I apologize. So what we're going to say is that we're actually going to write our specs against our infrastructure here. So we're going to describe the server, and we need to say that all instances that are in the index.js file have a Name tag. Anyone do cost allocation within their cloud? Does anyone get extremely frustrated, like I do, when people don't add tags to their resources? Yeah.
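The shape of that flow, mock the engine's resource creation, capture the declared inputs, then run specs against them, can be sketched in plain TypeScript. This mirrors the idea of Pulumi's runtime mocks, but it is a self-contained illustration, not the real @pulumi/pulumi mocking API; all names and shapes here are assumptions.

```typescript
// A self-contained sketch of "mock the cloud, then spec the inputs".
// mockNewResource stands in for the engine hook that answers resource
// creation locally; the spec function below then checks the captured
// inputs. None of this is the real Pulumi API, it just mirrors the idea.

interface ResourceRequest {
  type: string; // e.g. "aws:ec2/instance:Instance"
  name: string;
  inputs: Record<string, unknown>;
}

const created: ResourceRequest[] = [];

function mockNewResource(req: ResourceRequest): { id: string; state: Record<string, unknown> } {
  created.push(req); // remember what the program declared
  return {
    id: `${req.name}-id`,
    state: { ...req.inputs, arn: `arn:aws:fake::123456789012:${req.name}` },
  };
}

// "Creating" a server much like the problematic one in the demo:
mockNewResource({
  type: "aws:ec2/instance:Instance",
  name: "web-server",
  inputs: {
    userData: "#!/bin/bash\napt-get update",
    ingress: [{ fromPort: 22, toPort: 22, cidrBlocks: ["0.0.0.0/0"] }],
  },
});

// Spec: every instance must carry a Name tag.
function missingNameTag(): string[] {
  return created
    .filter((r) => r.type.includes("instance"))
    .filter((r) => !(r.inputs.tags as Record<string, string> | undefined)?.Name)
    .map((r) => r.name);
}

console.log(missingNameTag()); // the untagged server is caught before any deploy
```

The point is that nothing here touched a cloud or needed credentials: the spec ran against what the code *declared*, not against what exists.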
It's annoying, right? Great, this will actually catch it in advance, as part of your CI pipeline, if people have not added tags. So I could very quickly change it from the Name tag to a cost allocation, or a project, or whatever you call it internally in your company. Then we want to do things like: must not use user data. I'm very much somebody who believes in immutable infrastructure. I love creating new AMIs; those AMIs are launched, they have everything they already need, they're well tested, and everything is there. So I don't like people using user data on the servers I manage and create, so we've written a quick check that stops people from doing that. Lastly, you must actually have a named security group. But the most important spec of the lot, in this case, is: it must not have port 22 open to the internet. Okay, so, you know, everything is there, there's no problem. But how do we actually run this? So I'm going to set PULUMI_NODEJS_STACK, okay, I'm mocking a stack inside Pulumi, that's the equivalent of a Terraform workspace, let's not get into that conversation, because I know there are big Terraform fans here. We're going to set PULUMI_NODEJS_PROJECT, the project within what you're trying to run it in, again, an implementation detail of Pulumi itself. But lastly, we want to run Pulumi in test mode, PULUMI_TEST_MODE=true, which means it will not talk to the cloud, because I have no credentials on here. And then I'm going to run the file ec2tests.js, and if I run this, we can actually see that right now we're failing every test, and we get an output of why. So the first thing is, we're missing a Name tag. Okay. Next thing is, we're actually using user data. Next thing is, we're using an illegal security group. And lastly, we have port 22 open to the world. Okay, this is a really simple validation, a unit test check, that we as infrastructure people, or, if we have been forced into granting access to those wicked developers to actually launch infrastructure in your cloud, that we can force them to
build before they actually even run anything. And of course, ourselves as operators, if we can make it difficult for the nasty developers to create instances before they can cause problems, then we should, okay? And this is a great way I've been able to do that. So that's before the fact, and we already did after the fact. Let's look at the security side of things, okay, let's look at how you can apply policy as code. Anyone ever heard of HashiCorp Sentinel? Yeah, so Sentinel is a tool by HashiCorp that allows you to write security policies around the infrastructure that you manage. Okay, of course we wouldn't be a good competitor if we didn't have a similar thing, but again, this is open source, you're not forced to pay for this, okay, and that's the most important thing. Now, I have a much more interesting architecture here. So the first thing is, I'm going to call the file s3, and then I'm going to call the other one compute, because we're actually segmenting the difference between, like, storage and compute. And if I have a look at s3, we're actually creating an S3 bucket that is a website, that has a default index.html, that has server-side encryption using a KMS key ID. Okay, and of course we haven't created the KMS key, so we're just hard-coding the ARN string in here. And then we go inside compute, and in compute you now see the power of Pulumi. Okay, so we have a VPC which we're passing base tags, and the tags will actually be appended the whole way down. But the interesting thing is, for each of the zone IDs that come back, we can map a subnet to that zone ID, give it a specific CIDR block based on the map number, and then, lastly, we can actually map an instance into each subnet. This is technically how you would build an architecture in Amazon, okay? You would balance your VPC across all of the availability zones, or a number of availability zones, within your region, and then you would deploy an instance of your application into each of the availability zones for high availability. Okay, not groundbreaking stuff by any means. And then, lastly, we just push out some data. Now, there's a few things that we can test here, really a few things that we can test here, and we have this idea called policy as code, okay? And policy as code allows us to do a lot of things. The first thing that it allows us to do is subnet sizing. One of the things that we do very badly within the industry is we just choose random subnet sizes, and we don't understand the consequences when we run out of address space inside the subnet, and having to remap things around can be crazy painful. So I know a lot of parts of the industry right now actually try to enforce policies that stop developers from choosing subnets that are too large, and they try to keep them at /24 or smaller, and so on and so forth. So we can write a policy that says each of the subnets that we create must be no bigger than a /24, and the policy is there, and based on the code that validates the policy, it will actually run it and report on it. Then we can say, each of the instances, just for the people at the back, each of the instances that are created must have specific tags of Name, business unit and cost center, okay, again, so you can track within the organization. And then, inside our S3 policies, this is where we can start to do things like: the bucket must not be publicly accessible. This is a basic security concern, this is something we must be doing. If you're keeping state in your bucket, or if you're pushing any different files in there, the bucket must have versioning enabled. We should have no static website hosting, because there's no need for it, maybe you're hosting your own blog in your company's S3 buckets and they don't know, whatever. And lastly, you should actually have server-side encryption with KMS enabled. These are company-mandated policies that every piece of infrastructure in S3, or chosen pieces of infrastructure in S3 for this demo, actually need to adhere to, and the same for the compute things. Now, you can even
take a step further: you can say the version of MySQL that we're deploying into RDS, or into whatever cloud you want, it's not AWS-specific, must be less than 5.7, because our DBAs and our security team haven't validated anything newer against all the different practices of what they're doing. Now, we can actually do that by going in here, and we have this idea right now where we have Pulumi experimental. Sounds like a Harry Potter thing. It's not. All it is, is there are a number of features that we're testing, but we don't want to stop the deployment of our tool, so we're just going to set PULUMI_EXPERIMENTAL=true, okay? And then, based on that, I can say pulumi preview --policy-pack, and the folder is called policy-as-code. Now, policies can be mandatory, which will fail the build, or they can be advisory, okay, things that you want people to think about, but that don't block anything, it's not as urgent as something that's, like, a problem. Now, we can run this. This, of course, can be run inside your CI tool continually, or as part of a pull request, before anything happens and anything is deployed, and we will get an output that tells us what is wrong with the code that we have created. Hopefully it's running. I promise it's running, I did test this earlier, and it does fail. Come on, come on... gotta feed the children... there we go, excellent. So we can see straight away we're failing every policy that we have, because of the code I've written: our S3 bucket is publicly accessible, subnets are bigger than /24, and you get that output for every instance and for every subnet. Right, and of course it would be very nice if you used Pulumi, because I work for them, but you can use other tools to do this, okay? You can really use other tools to do this. I'm not suggesting that you need to go and spend money and do different things, just start to bring these types of practices into what you're building and what you're doing. Now, the last demo I have, because I'm seriously running out of time,
People have asked me: "Listen, I have existing infrastructure in the cloud that is not managed by Terraform or Pulumi or Ansible or any of these tools. What can I do today to start running this type of testing against those resources, to understand what's going on?" We can do that too, of course. Pulumi is an engine; just think of it as a CLI runner, an engine. So here we're going to write a file called index.ts (that one already exists), then a new file called specs/index.ts, because we want an instance of the Mocha test runner, and then a file called bucket.spec.ts. I'll delete this, because I know it doesn't work right now. Based on exactly what's going on, I need to import star as aws from @pulumi/aws. Then we can set a constant for our bucket name. The bucket name will be... what did I call it? Go away, Siri, always so needy. We called it fosdem-testing-bucket. Let's go back in here, and what we can do as part of a lookup... the import is bad. Where is it? No, it's there. Go away, why do you not want to work? There we go, thank you, people. That's why live demos are not fun. So we have aws from @pulumi/aws, and we can see it's gone from red to green. Now, inside it, we can set the constant of the bucket name, which we know is a resource that exists. This can be a database, it can be a VPC, it can be an instance, it can be any piece of information you want; and even if you're in Terraform you can do this as well, because Terraform has data sources. So we're running the command aws.s3.getBucket: get me the details of this bucket from AWS right now. And then I can write my specs against it: it should be in us-east-1, it should not have a website endpoint, it should not have a public ACL, it should have versioning enabled, it should have logs that are emptied after more than 45 days, and so on and so forth. So you can start this now. You don't need to be doing this only on greenfield applications or greenfield infrastructure. If you have longer-running servers that you manage, or longer-running networking infrastructure, or even DNS that you manage, you can be running these style of spec tests to make sure that no one is changing anything without you knowing it. These can be hooked up to a CI build and everything that's there. I promise I'm almost done, and then we can all go and drink... I mean, have a nice tea and talk about things.

So, a well-tested infrastructure ensures a number of things, and these are the key takeaways. First, confidence in our changes. Has anybody sat with their finger above the button thinking, "I might deploy this, but I'm not quite sure what it's going to do"? Me, 100% me; I've done this a lot of times. Second, it gives us less risky infrastructure deployments, which is seriously something we should all be striving for. And lastly, we can forget this argument about whether you want to deploy on a Friday or not. It shouldn't matter if you have the correct CI, testing and tooling around it; but if you're very, very strict on no Fridays, then that's okay, no Fridays. In summary: don't just test after resources have been created; the damage may already have been done to your infrastructure, to your company, to your reputation, and so on and so forth. For anybody who knows, I'm not going to talk about Brexit, but just before Christmas the Queen's New Year Honours list in the UK caused a big story, because somebody actually uploaded the entire spreadsheet of all the names on the honours list, including where they lived. Now, as one of the people on there was a politician, and another was one of the top policemen, for security concerns that was a
problem. So it was too late: it was out there, it was in the industry, and it was wild. We need to ensure that our infrastructure code is fit for purpose without spending money. You shouldn't need to be running a `terraform apply` or a `pulumi up`, or whatever the command for Ansible or Salt or Puppet or Chef is these days; at that point it's too late, you're spending money on resources. And lastly, we need to ensure our infrastructure code doesn't cause us security problems within our organisation. I have three minutes for questions, if anybody has any; please shout very loudly.

The question is: where do I store the state for my infrastructure, in the demos right here, in Pulumi? Is this a general Pulumi question? Okay, it can be a general Pulumi question. Pulumi by default will not store state locally; it will use the Pulumi SaaS, which is free for a single user. But you can immediately turn around and opt out of that: you can say `pulumi login --cloud-url` and give it your S3 bucket, and you will be able to store your state there; you don't have to use the SaaS for what you're doing. Next question, anybody?

Great question: is there a place on the internet where you can grab a bunch of tests that are already created and run them against your infrastructure? pulumi.com. It is there, of course, but lots of different tools are starting to feed into things like Open Policy Agent, so this part of the industry is going to change a lot in the next six months. But we do have policies there. Next question, anybody? Anybody just want to... oh, question at the back, shout very loudly. Good question: are you coming to Config Management Camp? Then you will see it there; I'm pimping my own talk for Gent. Okay, any other last questions? The answer is yes, you can import infrastructure, of course, there's no problem. Oh, so the question is... I promise I'll be out of your way in one minute. The question is: it's very easy to test code that's being created by Pulumi, because it's code, but how can you test infrastructure created by other tools? If you're in Terraform, you can look at Terratest. Terratest is really good for running these types of tests as well. Okay, he said he doesn't recommend it, but it exists, it exists, it exists. Time for one more question, one more? Somebody else put their hand up? Anybody else? Now, I've left some Pulumi stickers at the front, please put them on your laptops. Oh, it's okay, nobody has them anymore. Thank you.
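Coming back to the bucket spec demo for a moment: the assertions it makes (right region, no website endpoint, no public ACL, versioning on) can be sketched as a plain validator over an object shaped like the result of aws.s3.getBucket. The field names below are assumptions for illustration, not the exact @pulumi/aws SDK output; in the talk these checks live in bucket.spec.ts as Mocha-style expectations:

```typescript
// Field names are illustrative, not the exact shape returned by
// aws.s3.getBucket in @pulumi/aws.
interface BucketFacts {
  region: string;
  websiteEndpoint?: string;   // set only when static website hosting is on
  acl: string;                // e.g. "private", "public-read"
  versioningEnabled: boolean;
}

// Returns a list of violations; an empty list means the bucket passes.
function bucketViolations(b: BucketFacts): string[] {
  const problems: string[] = [];
  if (b.region !== "us-east-1") problems.push("bucket is not in us-east-1");
  if (b.websiteEndpoint) problems.push("static website hosting is enabled");
  if (b.acl === "public-read" || b.acl === "public-read-write") {
    problems.push("bucket ACL is public");
  }
  if (!b.versioningEnabled) problems.push("versioning is not enabled");
  return problems;
}
```

Run against a long-lived resource from a CI build, a non-empty result is the signal that someone changed something without you knowing.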