So it looks like about seven people are interested in talking about test-driven infrastructure with Chef. I don't know whether that's because I have the luck of the draw, and I'm in the post-lunch slot where everybody is either still eating, or maybe asleep because they've eaten too much. Those of you who are standing at the back, stand up for me, come forward, come a bit closer, because otherwise I'll have to shout a really long way. Power is important, power is important. Alrighty, okay, so: test-driven infrastructure with Chef. Ah, this is the wrong remote. Identical, okay. So let me start off by asking you a really dumb question: how many of you test your production code? Okay, those of you who have not put your hand up, is that because you don't have production code? So the question is, why do you bother testing your production code? For those of you who couldn't see the people sticking their hands up: most people test their production code. I'm reassured. So, well, why do you bother? Why do you do this? Well, most likely it's because you want to catch regressions. Maybe you want to make sure that you meet customer requirements. Maybe you want to build confidence in the code that you're writing. Maybe you want to make sure that quality is built in and keep defects out. Maybe you just want to keep things stable. It seems pretty obvious that you would like to test your infrastructure code as well. Still got the wrong thing. They are absolutely identical, right? I'll put that there and I won't pick it up again. So let me ask you the next question: how many of you test your infrastructure code? That's one and a half people. Okay, so this is very interesting, and when I ask that question, this is the kind of result I usually get. Yeah: "it would have been better if you hadn't asked me that question when my boss was sitting right by me." So the question I want to ask is, well, why not?
Why don't you test your infrastructure code, when you do test your production code? I'll leave you to think about that while we go through a little bit of a primer on test-driven development. Okay, so fundamentally, software engineering is actually a form of learning. The majority of people working on a software project have to pick up new tools from time to time. They are working with new organisations. They're solving completely new problems. The customers themselves are sometimes being exposed to problems that they didn't know they had. They're sometimes being forced to put into code, or into some kind of formal shape, something which has run on the basis of informal agreement, or maybe they just didn't know how it worked. So software engineering is fundamentally about learning. If we can make that learning easy and effective, we'll be effective software engineers. So it's important to understand that software engineers are not the same as civil engineers. When you build a bridge, or even a building, okay, there's some variation in size and scale, but pretty much bridges are bridges, right? Buildings are buildings. You're not actually coming across any radical new technology. Sure, there are variations, but you're not often going into completely uncharted territory. Most software developers, when they start out on a project, the chances are they have never done the thing that they're being asked to do before. You're going into completely unknown territory. So the only thing that you can be absolutely certain of is that you're going to get unexpected changes. Something's going to happen.
The one thing you can be certain of is uncertainty. So we want to encourage effective learning, and the way we get effective learning is by ensuring that we have empirical feedback. Not just feedback, but feedback that we can measure, feedback which is based on real data, feedback which is based on something that we can touch, that we can understand, that we can work with. So I'm going to suggest a few things that you could do to ensure that you get empirical feedback. The first thing you could do is deploy often. That makes sense: if you deploy often, you're going to be able to test your assumptions, you're going to verify whether or not the progress you're making is good progress, you're going to be able to understand whether or not there are any problems. All the best code in the world and all the best tests in the world: there's no substitute for putting the code out there and getting it in front of real people. So deploy often. Then you want to demonstrate regularly. You want to have a cadence where you're demonstrating the stuff you're working on, even if it's just within your team, or to stakeholders, but you want to demonstrate frequently. Why? Because every time you demonstrate you get feedback, and the more feedback you get, the more learning you're doing. The more learning you're doing, the less risk you're carrying, and the less risk you're carrying, the better for the project. You also want to be testing constantly. You always want to be probing at the system, making sure that you understand what's happening. When things go wrong, make sure you understand why they went wrong. So test constantly, all the time. And you want to make sure that the code you're writing is optimised for reading. Junior developers especially, people who first come onto a project, may well spend the first six months or a year mostly reading code. They may not even get to write code for a significant amount of time. And if your code looks like this, then I don't
really want to be the person who has to read it. So we need to make sure that what we're working on is readable and understandable. Fundamentally, all of these feedback loops need to be kept short, and the way that we go about this is making sure that we have testing right the way through our systems. Now, the problem with testing is, well, testing is really boring. Testing is that stuff you do at the end, when you've finished the project, and somebody says "what about test coverage?" and you go, "yeah..." It's a bit like that "what about testing your infrastructure code" question. So no, testing is really, really boring, right? You just go, "do I have to?" And the reason for that is that you're doing the testing after you wrote the code. Well, I would argue, and it has been argued very vociferously in the extreme programming community, that you should write your tests first. And the reason for this is that it guarantees you a number of great wins. The basic process that we go through when we develop our code in a test-driven way is red, green, refactor. We start by writing a failing test: we think about the thing that we're going to do, we write the test, the test fails. Then we make the test pass, and now we have the ability to refactor. We can refactor now because we have confidence that if the test breaks, it means we changed something, and we can go back and fix it. We have the safety net beneath us. This cycle is absolutely fundamental to the idea of test-driven development. And here's the golden rule, and I'm going to say this a number of times. The golden rule is: we never write new functionality without first writing the failing test. So what do we get for this?
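The red-green-refactor cycle described above can be sketched in plain Ruby. The `word_count` function and the tiny `check` helper here are hypothetical, invented purely to illustrate the rhythm:

```ruby
# A minimal hand-rolled assertion, so the sketch needs no test framework.
def check(actual, expected)
  raise "expected #{expected.inspect}, got #{actual.inspect}" unless actual == expected
end

# RED: write the failing test first. At this point word_count does not
# exist, so this line would raise NameError -- that's the failing test.
#   check(word_count("one two"), 2)

# GREEN: the simplest thing that makes the test pass.
def word_count(text)
  text.split.length
end

check(word_count("one two"), 2)

# REFACTOR: with the passing test as a safety net, restructure freely,
# for example being explicit about whitespace, and re-run the same test.
def word_count(text)
  text.strip.split(/\s+/).length
end

check(word_count("one two"), 2)
check(word_count("  one   two  "), 2)
puts "all green"
```

The point is the ordering: the test exists, and fails, before any production code does.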
Well, we get so much. Okay, let me go through it. Firstly, when you write your tests first, you clarify your acceptance criteria. You know what it is that you're doing, you understand what it is that you're delivering. You've got a handle on the domain, you've got a handle on the stuff that you're trying to do and the problems you're trying to solve. So when you write your tests first, you've got that first and foremost in your mind. And you know when you're done, because when the tests pass, you're done. It also encourages loose coupling. And I think, well, loose coupling, that sounds kind of important, maybe I should know about that. What is loose coupling? Well, this is why I've got this rather handsome Mark Levinson separates system here. I don't know if any of you have ever bought one of these. When I was a teenager, I saved up all my money because there was a stereo in the hi-fi store, and it had 17 different modes of flashy lights and all sorts of displays, and you could tweak everything, and it was in a box, and it had two tape decks, and it had a turntable on the top, and a bunch of other things. But it was one thing, like one big thing. And one day I kind of thought, you know what, it kind of looks cool, and it's got a load of flashy shiny things, but you know what? The sound kind of sucks. So maybe what I could do is get a better CD player. So I looked into getting a better CD player, and then I looked at my big shiny black box thing, and I realised, well, I can't plug the CD player into the big shiny black box thing, because the big shiny black box thing is just a thing. It's made of various components, but I can't get at them.
I can't pull them out and swap them out. I can't alter them or change them or do anything with them. So basically, if I want to do something a little bit better, to improve the compact disc player, I just need to throw away the big black box and start buying things again. That's an example of things which are not composable, and that's often the case when we write code without tests first; I'll come to an example of this shortly. So really, test-first encourages loose coupling: no big black boxes. The other thing is it provides executable documentation. One of the ideas floating around the Agile Manifesto is that people don't like documentation. Bullshit. Who told you that? I mean, seriously. No, we don't like writing vast quantities of manuals that we're going to hand over in a big box and say "there you go, there's your documentation." But we do want documentation. We just don't want documentation that ages. We want documentation which is live and grows and changes. So: executable documentation. And of course it grows your regression suite, because every time a fault comes up, you write a test to catch the fault, and your regression suite grows. And this is very important. If you're writing your tests first and you run the tests and then something strange happens, or you write your code and then you run the tests again and something breaks, well, rather than handing it over to a testing team somewhere else, who maybe are looking at features that you shipped three months ago, you're talking about code that you wrote just now. So the context is right there, fresh in your mind. You're there, having freshly introduced a problem into the system, the test has caught it, and now you've got fully loaded context. You're going to be able to fix it more quickly.
And finally, you avoid gold plating. I'll come to this again shortly, but gold plating is that thing where you think, "well, you know, it would be pretty awesome if I did this as well. Hey, while we're at it, maybe we could make it do this. Hey, what about a spinny wheel? Everybody needs a spinny wheel, right?" All these extra things that you start ladling on, and the reason you do that is because you didn't write tests. You didn't write tests up front that defined when you were done, so you just keep on adding functionality. So: avoid gold plating. Fundamentally, then, when you're doing test-driven development, you're not just testing the implementation. You're not just testing "does the thing work?" You're also testing whether or not the code is well structured. So let's go through this at a brief, high level: how do we go about writing a unit test? In order to write a unit test, we need to make sure the thing that we're testing can operate in isolation, like the CD player. Can we get it over here and work with it in isolation? Then we need to instantiate the object, and we need to work out what its dependencies are and provide them. Then we interact with it, and then we verify that the object behaves as it... "expiated"? Wow. Actually, I do know what expiation is, given that I have a degree in theology, but the object is not dying for our sins; the object is behaving as expected, not as expiated. Wow, I never expected to talk about that. So: the object behaves as it was intended. And if this doesn't happen, then it's because there's something wrong with the design, and this is called listening to the tests. If one of those things is wrong, it's probably because the object is badly coupled, or there are unclear dependencies, or the responsibility of the object is unclear. If the test is hard to write, the design is probably wrong. And this brings us on to the whole thing: it's perfectly possible to write really, really great software that is utterly pointless.
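The four steps just described, isolate, instantiate with dependencies provided, interact, verify, can be sketched in a few lines of Ruby. The `Greeter` class and its injected clock are hypothetical, just to show why an isolatable object (unlike the big black box) is easy to unit test:

```ruby
# The object under test. Its only dependency, a clock, is injected,
# so a test can work on it in isolation -- like the CD player, not the box.
class Greeter
  def initialize(clock)
    @clock = clock
  end

  def greeting
    @clock.hour < 12 ? "Good morning" : "Good afternoon"
  end
end

# 1. Isolate: supply the dependency ourselves, as a fake clock.
FakeClock = Struct.new(:hour)

# 2. Instantiate the object, providing its dependencies.
greeter = Greeter.new(FakeClock.new(9))

# 3. Interact with it.
result = greeter.greeting

# 4. Verify that it behaves as expected.
raise "expected a morning greeting" unless result == "Good morning"
puts "test passed"
```

If any of those steps is painful, say the clock can't be swapped out, the test is telling you something about the design.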
So this is a superb book, Growing Object-Oriented Software, Guided by Tests, by Steve Freeman and Nat Pryce. These guys are part of the London extreme programming community, and yeah, it's a superb book. There is a gist, which I will publish later, which has a bunch of reading you can look at. I recommend this book. They say: "We've seen projects with high-quality, well unit-tested code that turned out not to be called from anywhere, or that could not be integrated with the rest of the system and had to be rewritten." So this brings me to the useless crap diagram. This is fantastic. How many of you have ever worked in an environment where you built some software really, really, really well, but you gave it to the customer and the customer said, "yeah, that's great, but it's not really what we wanted"? Has that ever happened to you? Yeah, that's happened to some of you. Okay. What about this: have you ever been in a situation where you've built some software, and it's absolutely mission-critical and does everything the client wants, but it's horrible? You dare not touch it. Anybody ever been in that situation before? Yeah, absolutely. Okay. How many of you have been in the situation where you've got truly terrible code, really badly written, which is also of no use to the client? Yeah, I've done that too. Okay, that's the useless crap. So what we're actually looking for is success. In order to build successful software, we need to be building the right thing and building the thing right, and we do that by writing our tests. And the antidote to this is to write the acceptance tests first, rather than unit tests first. So our circle now looks like this: we start off by writing a failing acceptance test. The acceptance test is going outside-in, describing the way we want the system to behave. The test fails, obviously, because we've not written any code.
Now we write a unit test that also fails. Then we make that test pass, we refactor, and then we go back round the circle again. This is the standard BDD outside-in thing. Okay, so there ends my primer on test-driven development. Now, how does this apply to infrastructure code? This is the interesting question. At this stage I'm going to go for a shameless plug: I wrote that book, and if you would like to get it, you can get 50% off if you use the code orth D, or 40% off if you buy the ebook. So there you go. In this book I look at Brian Marick's testing quadrant. Yes, on the O'Reilly site. Yes, on the O'Reilly site. So, the testing quadrant. There are various kinds of testing that we could do if we're thinking about the systems that we build. Let's just clarify: what do I mean by infrastructure code? I'm talking about production code, the code which is responsible for building the infrastructure upon which we deploy our systems. So it's actually pretty damn important. Okay, so for this code, what kinds of tests could we conceivably do? Well, we could do usability tests, and we could do exploratory tests. This is the kind of stuff that QA engineers are great at, that kind of "how can I break this?" What happens if I fill the entry form with a load of nines? Does it break something? What about if I go backwards and forwards 12 times? That's okay; the 13th time, it breaks. That kind of exploratory stuff. You can't automate that stuff. That's a special kind of skill; you need a special kind of mind to be good at exploratory and usability testing. Then you've got load tests and penetration tests.
Well, those are great, and you probably don't run them all the time. You probably don't automate them, or you can automate some of them: you can automate your performance testing and your load testing, but you don't necessarily run them all the time in an automated way, though I guess you could. But that leaves these two over here: your acceptance tests, and your unit and integration tests. This diagram is talking about the function of these tests. Are these tests designed for the business, for the people who are looking at the system from the outside? Or are they for us, the engineers who are writing the code? And what's their purpose? Do they support the development, the process of building the software, or do they help us understand whether or not we're meeting the requirements? The ones towards the left are the ones that we as engineers are most frequently engaged in, but we can see we need business-facing acceptance tests and technology-facing unit and integration tests. Okay, so how does this work within test-driven infrastructure? Well, it's actually just the same, only a little bit more complicated. We start off by writing some acceptance tests. These acceptance tests say: I'm building some infrastructure; in the case of Chef, you're building some Chef cookbooks. Well, why are you doing that? What problem are you trying to solve? Who's your end customer?
There are a couple of ways of thinking about this. In the Chef community there's the concept of a wrapper cookbook. A wrapper cookbook is something which you would use to deliver a piece of functionality, say a website or an internal application; it solves a specific purpose. That wrapper cookbook will then pull in library code from elsewhere in order to deliver the functionality you need. So you would write some acceptance tests for your cookbook, and these would be things like: when I go to the website, does the website have a login page? And when I go to the login page, can I log in and use the site? So it actually looks really quite a lot like the kind of acceptance tests that you might have written already as software developers. Or, if you're building something a little bit lower down, maybe a continuous integration server, or maybe you're setting up a Riak cluster or whatever, then you might write some acceptance tests which basically say: given that this infrastructure has been built, can I use it in the way it's designed? Then we run the acceptance tests. Are those acceptance tests going to pass? Well, first time? No, they're not. So then what we do is write integration tests. The traditional idea of an integration test is a test working against code that we don't control. Now, it's not quite the same in the world of Chef, but it's kind of similar. Integration testing is what happens when you mix the various bits and pieces you have: various cookbooks, some code that you didn't write, some code that you did write, maybe having to talk to external sites. So it's still an integration test. We're going to write these integration tests, and then we're going to run them. Now, if they pass, well, that's great. Probably a miracle, though, because we haven't written any code yet. They didn't pass, so now we write the unit tests. The unit tests are down at the level where we're writing Chef cookbooks and recipes.
What we're actually talking about is: well, what resources are we using? Are we looking at files? Are we looking at packages? Are we looking at services? We're testing down at that level, and once the unit tests pass, we back out to the integration tests; if they pass, we back out to the acceptance tests. So it's still the nested loops and feedback thing, but at a slightly more involved level. If you want to do that, there are a bunch of testing tools that you could use. ChefSpec is the one for unit testing; ChefSpec is a very powerful and very capable tool. Test Kitchen is the one which I'm going to talk about most today; this is a framework which allows you to do all manner of integration testing. Serverspec is a component which we use with Test Kitchen, which allows us to use RSpec syntax to test infrastructure. Cucumber would be used for your acceptance tests, and Leibniz is a piece of software I wrote which is designed to allow you to spin up infrastructure externally so that you can test it. Now, I actually tweeted about this last night, because I was having a look at some of the open source projects that I've worked on, and I am terrible, terrible, terrible at... I guess I write okay software, but then I kind of forget about it, and pull requests appear and I don't look at them, and issues build up and I don't look at them. I have worked on this tool for a really long time, and actually it still works, and it's really capable and really useful, but there are loads of things that could be added to it that would make it even more powerful. So I'm standing here in front of you guys, and anybody who sees this on the video, and saying: today I turn over a new leaf, and I'm going to try to be way more responsive on pull requests and way more responsive on issues. I was thinking about this yesterday, as I was in the shower. I was thinking, you know, when you write open source software, you're entering into a contract. It's a funny thing.
It's an unwritten contract, but you're entering into a contract. Because maybe I wrote something to scratch my own itch, and I stuck it out there because I believe in open source software, great. But as soon as I stick it out there and somebody starts using it, there's an unwritten contract which says: I wrote the software, so I kind of need to maintain it. I kind of need to be responsive when people say it's broken. Now, if you're not prepared to do that, that's fine; stick something in the README that says "this is purely for my fun. If you want to use it, go ahead and use it, but I'm an asshole and I don't respond to pull requests, and I don't respond to issues. So you're on your own, buddy." Now, if you haven't written that in your README, but you don't respond to issues and pull requests, you're an asshole. That means I've been an asshole. So I'm sorry. Focusing in, then, we're going to look at Test Kitchen and Serverspec, and I'm just going to explain why that is. We are looking at two different sorts of things; we already saw this on the Brian Marick diagram. We're looking at external quality and internal quality. External quality is pretty easy to judge: does the thing work? Is it responsive? Is it stable? Is it performant? Internal quality is: can I maintain it? Is it readable?
Are there a lot of bugs? That's the internal quality. And there's an inverse relationship here. When you start with unit tests, they're really, really useful if your objective is to measure and understand internal quality, but they're not that useful if you want to work on external quality. And inversely, your acceptance tests are great at verifying the external quality of your systems, but they're not that great for helping you understand the code itself. They help you understand whether or not you get the domain, but they're not so useful when it comes to actually understanding the quality of the code. And so really the sweet spot is in the middle here, at this intersection, which I'm calling integration. My understanding, from my experience in this domain, is that we have a curve that looks a bit like this. This is just a sketch; it's not based on any real data, it's just a gut feeling. The gut feeling is that ChefSpec is super easy to use, and you get, you know, a fair amount of value from it. Test Kitchen gets you way more value, and it's not that much more difficult to use. And then you've got Leibniz up here, which is where you have to start writing acceptance tests in pure Ruby and then orchestrate the spinning up of the machines. Well, that's super, super, super valuable, but it's also super, super, super time-consuming and difficult. So if we were to integrate this curve and measure the value you get from each tool, it's definitely worth spending our time on Test Kitchen. So for this reason, we're going to talk about Test Kitchen, also known as Kitchen CI. So what is Kitchen CI? Kitchen CI is a pluggable framework.
It allows you to harness and build your tests in a way which is unique to you. You can choose what you want to write your tests in, and you can use it to test pretty much anything. It gives you an interface to the entire development lifecycle: spinning up machines, converging nodes, running tests, verifying that the tests did what they were supposed to do, and destroying the machines again. The reason it's called Kitchen CI is that the whole idea is that you'd be able to take this framework and use it as an entry point into your continuous delivery or continuous integration system. So you can write your infrastructure code and then plug it through Kitchen CI, and out will pop "yay, all is good" or "boo, not so good". Kitchen CI is based around a simple lifecycle. First we create systems: we're building and writing infrastructure code, that infrastructure code needs to go on a Linux box somewhere, so we need to create those machines. Then we need to converge the nodes. In Chef-speak, this means we take our desired state, which we've written in our Chef recipes and cookbooks, and we apply it to our machines, to make the world the way we want the world to be. Then there's a setup phase; the setup phase is a bit of magic in the background which is responsible for installing whatever you need in order to run the tests. Then there's a verify step, where you run some tests after the machine has finished converging. And then you destroy the system. The test step does all of that at once. The way I like to think about this is: if I were to say to somebody, "okay, you wanted a highly available MySQL setup; okay, I've done it, it's ready for you to use," they might say, "okay, I'm just going to check it out, see if it's okay, and if it's fine, yeah, we'll go with it." If you were that person, where would you go, and what would you do? Well, you'd log on to the box.
You'd run some commands. You'd look to see whether something was listening on port 3306. You'd look to see whether the service was running. You'd maybe try to make an external connection. You might try to create and drop a database. That's the kind of thing you would do. So what Test Kitchen does is: it is that person. It does that stuff for you after the machine has been built. And you can write your tests in whatever you like. The common one, the one I use, is Serverspec. You can write raw RSpec. You can use Minitest. You can use some shell-based testing. You could use Cucumber; if you're a Python person, you could use something else. Hell, you could use BBC BASIC for all I care. It doesn't matter. The idea is that Test Kitchen gets out of your way; it makes it easy for you to write tests. In order to get started you need three things. You need the Chef Development Kit, which is a super easy to install package which gives you everything that you need in order to start writing Chef cookbooks and testing them. But then, because we're going to be building some machines and testing them, you're going to need Vagrant, and you're going to need VirtualBox as well. Okay, so how do we install these tools? Well, we want to write some test-first code. Are we going to do this with a package here, a package there? Do we really want to be messing around installing this manually? Well, no. I guess we're going to use Chef to install these tools, right? Okay, but we've got ourselves a bit of a problem. We've got a fractal problem here, because what we want to do is write a cookbook, test-first, that builds a platform, so that we can write cookbooks, test-first, that build a platform, so that we can write... So where do we start?
Okay, well, fortunately, I already had one that I made earlier. But what I'm going to show you now is how to write that infrastructure from scratch in a test-first way. I actually did this this morning: I went through it from scratch, and I took screenshots as if I had started from nowhere, and I've made this cookbook available, so that should you wish to go ahead and play with Test Kitchen and learn about test-driven infrastructure, you can do so. You start off by using the chef CLI; the chef CLI is what you get from the Chef Development Kit. What you get in that development kit is a bunch of things: Foodcritic, which is a linting tool; Test Kitchen, which we're talking about; ChefSpec, which is the unit-testing tool; the chef CLI, which we've just talked about; Knife, which is a tool for interacting with the Chef server in a number of ways; chef-client, which is the thing you run when you want to actually converge a node; and then there's Berkshelf, which is the dependency solver. Okay, so what we're going to do is use the generate command of the chef CLI to create a wrapper cookbook. When you run this generator, all you're doing is creating this cookbook; it creates a bunch of stuff for you, and it drops off all the things that you need in order to get started. Once we've created that, what we end up with looks like this. It's in the kitchen directory. It's got a Berksfile, it's got an ignore file, it's got the metadata, it's got some documentation, and it has a default recipe. That is it. It doesn't do very much. Then, in order to actually start working with this: Test Kitchen is driven by a file called .kitchen.yml, and it has a few basic ideas. The first thing it has is the idea of a driver. A driver is basically: how are you going to provision the machines that you're going to use for your testing?
So in this case we're going to use Vagrant, but you could use whatever you like: you could use EC2, you could use DigitalOcean, you could use VMware, whatever you like. In this case, we're using Vagrant. Then the provisioner is: when you're going to run Chef, how are you going to do it? Are you going to use the Chef server? Are you going to use chef-solo? What are you going to do? By default, we're going to use Chef Zero. Chef Zero is awesome. If any of you are using chef-solo: stop, just use Chef Zero. It's better. It's like an in-memory Chef server that just runs in real time, super fast, super easy. So use Chef Zero. Then we're talking about the platforms: what kind of systems are we interested in working on? This is where it becomes really valuable for infrastructure developers. As infrastructure developers, if we're writing a cookbook that is library code which is going to be shared by other people, like an Apache cookbook or an nginx cookbook, you want to be sure that it will work on every flavour of Ubuntu, every flavour of CentOS, every flavour of SUSE, FreeBSD, OpenBSD, Windows, whatever you like: AIX, Solaris, HP-UX, all the things that Chef supports. You want to make sure that it's tested. Now, you try doing that manually: it's going to take a really long time, and once you start building up the matrix you're talking about hours and hours and hours of manual testing. So the idea behind this file is that you can specify the platforms that you want to test on. In this case I'm picking Ubuntu 14.04 and CentOS 6.5. And then these are the test suites: there's a test suite called default, and what it's going to test is the result of applying the default recipe from the kitchen cookbook's run list. Okay, and there are no attributes.
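A `.kitchen.yml` along those lines might look like this. This is a sketch: the cookbook name `kitchen` and the exact platform names are as assumed from the talk, and your generated file may differ in detail:

```yaml
driver:
  name: vagrant            # how test machines get provisioned

provisioner:
  name: chef_zero          # in-memory Chef server, instead of chef-solo

platforms:                 # the matrix of systems to test against
  - name: ubuntu-14.04
  - name: centos-6.5

suites:
  - name: default
    run_list:
      - recipe[kitchen::default]   # apply the wrapper cookbook's default recipe
    attributes: {}
```

Each suite is combined with each platform to produce the instances Test Kitchen manages.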
All right. So when we run `kitchen test`, what it does is go and get some Vagrant boxes, create some instances, Ubuntu and CentOS ones, converge each node (there's nothing in the recipe yet), skip the setup because there are no tests yet, skip the verify stage, then destroy the instances, and exit zero. So let's run that test. Okay, what happens? Well, it goes through. It takes a little while; in this case it takes three minutes 54 seconds, because it has to go and get all those boxes, download them, and so on. It'll be quicker the next time around. And we can see it exited zero. Okay, so now what we need to do is write a test. Now, this is a conventional test layout, and you must follow it. The convention of Test Kitchen is this: you need a directory called test, containing one called integration; those must be there. The next level corresponds to the name of your suite; our suite is called default, so we have a directory called default. The level below that corresponds to the tool that you're using for testing; we're using Serverspec, so we'll stick it there. The next level is particular to Serverspec: Serverspec is host-oriented, and we're just going to run on localhost. And within that live the test we're going to run and a spec helper, which just makes everything possible. So let's create that spec helper. This is just boilerplate that you need to have: we require serverspec and pathname, include the helpers, and then set things up so that we're using the right operating system and commands from the Serverspec code. Literally, you just boilerplate that in. So then we're going to write a test, and this is just RSpec. What we're going to do is describe the kitchen cookbook's default recipe.
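That spec helper boilerplate might look something like this. The exact contents depend on your Serverspec version; this is the 1.x style that matches the description above (in Serverspec 2 the includes become `set :backend, :exec`):

```ruby
# test/integration/default/serverspec/spec_helper.rb
require 'serverspec'
require 'pathname'

# Run commands directly on this host, and detect which operating
# system (and therefore which commands) Serverspec should use.
include Serverspec::Helper::Exec
include Serverspec::Helper::DetectOS

RSpec.configure do |c|
  c.before :all do
    # Make sure admin commands like useradd are findable
    c.path = '/sbin:/usr/sbin'
  end
end
```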
I'm going to say that it should create a kitchen user. And then we're going to use the standard expect syntax: we're going to expect the user kitchen to exist, we're going to expect it to have the home directory /home/kitchen, and we're going to expect it to have a login shell of /bin/bash. Okay, great. So then we're going to go and create our test instances by running `kitchen create`, and that'll create those instances. Now, we can keep them around for a while, so that we don't have to keep firing them up again and again, but you need to get into the habit of destroying them and starting them again, just to make sure that you're not accidentally getting any side effects. Once we've done that, we can run `kitchen list`, and we'll see that we now have a machine. This is called an instance; it's made up of the suite that you want plus the platform that you want. You can have multiple suites and multiple platforms, so you can have as many instances as you like; in this case, we have two. Okay, now we're going to run our test. When we run the test, it's going to run `kitchen verify`. That's going to install Chef; it's going to run the kitchen setup, which installs the Serverspec testing tools and anything else necessary to make them work; it'll copy our tests up to the instance; it will run those tests; and then it will report back. So what we're going to do then is run the tests and watch the test fail. Well, obviously it should create a kitchen user, but it didn't; that's because we haven't written any code yet. Okay, so let's write the code to make it pass. This is an example of a simple Chef resource. We're specifying that it's a user resource, and that it has the name kitchen. We're specifying that it supports manage_home, which basically means we would like it to create the home directory when we build the user, and we would like it to have the shell /bin/bash. Fine. So now we're going to reconverge the Ubuntu node.
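Side by side, the spec just described and the first cut of the recipe that should satisfy it might look like this (a sketch, using the layout above; file paths and the cookbook name kitchen are assumptions):

```ruby
# test/integration/default/serverspec/localhost/default_spec.rb
require 'spec_helper'

describe 'kitchen::default' do
  it 'creates the kitchen user' do
    expect(user('kitchen')).to exist
    expect(user('kitchen')).to have_home_directory '/home/kitchen'
    expect(user('kitchen')).to have_login_shell '/bin/bash'
  end
end

# recipes/default.rb
user 'kitchen' do
  supports manage_home: true  # ask the provider to create the home directory
  shell '/bin/bash'
end
```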
We'll do it by running `kitchen converge ubuntu`. This will run Chef, and it will apply the recipe that we just wrote; when we do so, we see that it created the user kitchen. So now we can run the test again, and this time the test passes: the kitchen cookbook default recipe should create the kitchen user, and it did. Now let's run the test on the CentOS node; that also passes. Cool. Okay, so now what we're going to do is ensure that we have Ruby in our path. The reason I had to do this is that on this box I had not installed Ruby. Now, one of the things that you get with the Chef Development Kit is a command called `chef shell-init`. What it basically does, if you type `chef shell-init` and then the type of shell that you use, is say: if you just copy and paste this, then you can get the Ruby and the gems which are part of the Chef Development Kit available to you in your shell right now. So that's what I did: I ran `chef shell-init bash`, I copied and pasted the output, and now Ruby is available to me. The reason we needed that is that the next thing we're going to do is deploy. We've got a passing test; we've got some functionality. Let's deploy. I'm using the Chef server in this instance, but you could use chef-zero. So what you do then is run `berks install`. Berkshelf is the dependency solver. In this case we have no dependencies, because we've just written one recipe, but later on we'll have more, because we'll be depending on upstream cookbooks, which in turn depend on other cookbooks, and we need to solve those dependencies. So we've resolved the dependencies, and Berkshelf puts them all in one place where we can use them, just in the same way that Bundler does; and then we're going to upload them to our Chef server with `berks upload`. They've gone up to my Chef server, and I can use them. So now I'm going to bootstrap a machine.
So in this case I ran the DigitalOcean knife plugin, and I launched a machine; it installed Chef and ran Chef on that machine. Then I logged on to the machine, I tried to switch to the kitchen user, and it said: no directory, logging in with HOME=/. Well, that's a bit odd. What's going on here? Okay, remember the golden rule: we never write new functionality without writing a failing test. So clearly we have missing functionality. We did something wrong; you could call it a bug, or you could call it missing functionality. But on the Ubuntu machine we don't have a home directory, despite the fact that we asked Chef to create one. What is going on here? Well, what we need to do is write a failing test. This test says that there should be a home directory /home/kitchen, and it should be owned by kitchen. So let's run this against CentOS. The test passes; cool. On the CentOS machine we do have a home directory, no problem. What about on Ubuntu? Okay, we don't have a home directory. Right, so how are we going to fix that? Well, it turns out that we just need a directory resource for /home/kitchen with the owner kitchen, and that's done. Now, I think this might actually be a bug, but it's always been known that Red Hat is opinionated about the way that it creates users. When you create a user on Red Hat, you get a default shell, and it creates your home directory for you. That is non-standard from a Unix perspective, but it is the Red Hat standard, and that's why CentOS does it. Ubuntu does not do that: it just creates the user, doesn't give you a shell (just /bin/sh), and doesn't give you a home directory. So now we've added that resource, we converge the node, it creates our directory, and we run the tests. Oh, here's an interesting one: on CentOS, idempotence happens. Nothing happens; Chef didn't need to do anything, because the directory was already there. Now the tests pass. Great. So what do we do?
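Sketched out, the failing test and the one-resource fix from this step might read:

```ruby
# test/integration/default/serverspec/localhost/default_spec.rb (addition)
it 'creates the kitchen home directory' do
  expect(file('/home/kitchen')).to be_directory
  expect(file('/home/kitchen')).to be_owned_by 'kitchen'
end

# recipes/default.rb (addition): Ubuntu's user provider left no home
# directory, so manage it explicitly; on CentOS it already exists, and
# Chef simply reports the resource as up to date.
directory '/home/kitchen' do
  owner 'kitchen'
end
```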
We bump the version, we release, and we deploy. The version is controlled in a file called metadata.rb; the metadata says what version the cookbook is, and we're going to bump it by a patch version. We're observing semantic versioning, so we bump the patch version because we fixed a bug, basically. And now we do `berks install` and `berks upload`; we've uploaded it to our Chef server, and we could carry on and deploy our system again. Now, remember: we never write new functionality without writing a failing test. So let's add another test and watch it fail. This time we think: well, we really need VirtualBox and Vagrant, don't we? We could do a bunch of more complicated tests, but in this case all we're going to do is say that when we run `VBoxManage --version`, the output should match a regular expression that describes the way VirtualBox prints out its version; likewise when we run vagrant, same thing. So now we write the code to make it pass, and in this case all we need to do is include library code from upstream, so we include the virtualbox and vagrant cookbooks. And because the vagrant upstream cookbook is a bit poor, or rather a bit old, I forked it and improved it, and my pull request has not gone in yet. So, just in the same way you would in a Gemfile, we point the Berksfile at my fork and say: get it from there. Then we converge the node, Berkshelf solves the dependencies, and now the tests pass. Great. So we bump the release; this time it's 0.2.0, because we added a feature. Okay, and we add the dependencies in the metadata. That's important, because that's how Berkshelf works: it looks in the metadata, sees what dependencies it needs, and then goes and gets them. If you're interested in how semantic versioning works for Chef, there's a URL on this slide with more detail.
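The bump itself is mechanical. As a plain-Ruby illustration of the semantic versioning rule being applied here (this is not part of the ChefDK tooling, just a sketch of the convention):

```ruby
# Illustrative only: bump a MAJOR.MINOR.PATCH version string per semver.
# A bug fix bumps PATCH; a backwards-compatible feature bumps MINOR and
# resets PATCH; a breaking change bumps MAJOR and resets the rest.
def bump(version, level)
  major, minor, patch = version.split('.').map(&:to_i)
  case level
  when :patch then [major, minor, patch + 1]
  when :minor then [major, minor + 1, 0]
  when :major then [major + 1, 0, 0]
  end.join('.')
end

puts bump('0.1.0', :patch)  # => 0.1.1 (we fixed the home-directory bug)
puts bump('0.1.1', :minor)  # => 0.2.0 (we added the VirtualBox/Vagrant feature)
```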
So finally, then, we need the Chef Development Kit and we need git. Well, git is straightforward: we write the same kind of test, we run it, the test fails, and adding git is super simple, because we just do `package 'git'`. The Chef Development Kit is a little less straightforward, because we need to work out what the file is called and where we get it from. I'll run through it very quickly. What we're doing is keying off the platform family. Whenever Chef runs, something in the background called Ohai runs too; it profiles your system, works out what kind of a system it is, and makes that data available to your runtime. So we say: when it's a Debian type of machine, an Ubuntu machine or a Mint machine or whatever, then we calculate the package name, fetch the package using the remote_file resource, and then tell the dpkg_package resource to install it. We do the same for the Red Hat family; only the name is slightly different. That's all we need to do. Now, this relies on the idea of a cookbook attribute, which is a default setting that we can use. In this case we're saying that the base URL is the Opscode omnibus packages bucket on S3, and the version we want to install is 0.2.1; you can always override these at any stage in your system. So then we go ahead and converge the node, and the tests pass. Yay! Now we have a kitchen user, a home directory, VirtualBox, Vagrant, the Chef Development Kit, and git. I call that success. Pretty awesome. My conclusion comes from Michael Feathers' fantastic book, Working Effectively with Legacy Code. Straightforwardly, to be blunt: code without tests is just bad code. It doesn't matter how well written it is; it doesn't matter how pretty or object-oriented or well encapsulated it is. With tests, we can change the behavior of our code quickly
and verifiably; without them, we really don't know if our code is getting better or worse. And that applies to infrastructure code just as much as it applies to production software. So, what next? Well, one of the things that you probably need to do is speed up the tests, because tests that take too long to run end up not being run. There are ways to do that: there's a Docker driver and an LXC driver for Test Kitchen, which make things way, way quicker. If I'd had time, I would have demonstrated that to you, because it's really awesome. The next thing, of course, is to automate all the things. You want your cookbooks plugged into something like Travis CI, so that all your tests run, and if somebody submits a PR, or somebody submits some change which fails linting, you know about it right away. If you're a Ruby developer, you might be familiar with the idea of Guard: you can have something running that spots whenever you've made a change and then runs the tests for you right away, so you get feedback faster and faster. And then, of course, you want to plug it directly into Jenkins. That way you have an end-to-end tested system, built on the foundations of test-driven development, which has been proven effective throughout the agile and extreme programming movements over the last 15 years, applied directly to Chef code. So thank you very much.