The past president of the Sydney Linux Users Group, and currently the vice president of Linux Australia. He has a fetish for cooking. You'll notice that nearly all his programs are named after various dishes: nachos, cucumbers, and that sort of thing. So, I'll hand you over to Lindsay Holmwood. I should probably turn this on. Can you all hear me? Through the microphone, I mean? Cool. Is the level okay up the back? This microphone's a bit crap. All right. Good morning, everyone. I'm Lindsay Holmwood. I'm a senior engineer at Bulletproof Networks. We do hosting and that sort of thing. If you're looking for a job, come and talk to me after this. I very much appreciate you all making the trek all the way down here to the other end of the university, to L101. But this is true: I'm being ostracised by the conference organisers. So, today I'm going to be talking about behaviour-driven infrastructure and the implications of infrastructure as code. First, let's take a step back. You may have heard of the term behaviour-driven infrastructure through another thing called DevOps. I'm assuming that most people here have heard of DevOps in one way or another? Awesome. Cool. So, this is the technical side of DevOps, without so much of the project management side of things, although that factors into it a tiny bit. And I'm assuming the developers in the crowd definitely would have heard about behaviour-driven development. Do the ops people in here have an understanding of what behaviour-driven development is? You can say no. That's cool, because that's what the first part of the talk is about. Okay. So, we've got to understand what behaviour-driven development is in order to understand where behaviour-driven infrastructure comes from and how it works. The question here is: what is a behavioural test?
Well, to understand what a behavioural test is, we actually need to take a step back and look at the origins of modern automated software testing. I'm not going too far back, just to test-driven development. Test-driven development really started the renaissance of automated testing in the software industry, and it's a very, very simple workflow, not difficult to grasp at all. You write a test and you run that test, and you find that the test fails, because the system doesn't do what it's meant to do yet. Then you make the test pass, by filling in the blanks, or maybe mocking and stubbing and whatever. Then you refactor the code behind the scenes to make that work. And then you just write more tests, and you're basically in this endless cycle of joy. The approach with test-driven development, at least at the beginning, is very unit test focused. You're basically testing the input and output of functions or methods, that sort of thing. You're just looking at individual units within the system. You're not really looking at the flow of data or the interactions of users within the system. So, really simple things like this: you pass foo a bar, you get a result, and then you do some assertion on that, checking that the result matches some particular expectation. The common testing toolkit there is xUnit. Behaviour-driven development is very much a reaction to the focus of those tests. We're testing the flow of data in the system and the way that someone actually interacts with it. Really simple, very high-level things: you're basically testing that a user can perform a particular task. You don't really care about inputs and outputs. You just care that you can accomplish a particular goal. And these are really high-level things as well.
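To make the contrast concrete, the xUnit-style unit test described a moment ago (pass foo a bar, assert on the result) might look like this in Ruby. The `foo` method and its expected behaviour are invented for illustration:

```ruby
require "minitest/autorun"

# A minimal sketch of an xUnit-style unit test: call a unit with an
# input, assert on the output. `foo` and the expected value are made up.
def foo(arg)
  arg.reverse
end

class FooTest < Minitest::Test
  def test_foo_reverses_its_input
    result = foo("bar")
    assert_equal "rab", result
  end
end
```

Nothing here knows about users or workflows: it checks one unit's input and output, which is exactly the narrow focus behaviour-driven development reacted against.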
Like, I just want to be able to make a donation, or search for a product, or check out my cart, or do any of those standard user-focused things. Really, the behaviour-driven development movement was born out of a business need: there was a disconnect between what was actually being tested in the software and what the business actually expected the software to deliver. So, it really is about verifying that a business's functional requirements are met. And because of that, the business doesn't actually care too much about the underlying implementation of the way the system is built, right? It's actually quite irrelevant to the business. They just care that they're able to perform a particular task so they can make money. And you're never, ever, ever going to see a business doing something like this, right? It's just never going to happen. What they're really asking is: can I search for things? Or can I do any of those things I was talking about before, those high-level user interactions? So, the function in the real-world sense is very, very important; not in the programming sense at all. The implementation per se really doesn't matter all that much. So, if we're verifying that a business's functional requirements are met, then what are the tools for doing that? The common ones here are integration tests and acceptance tests, and they can be manual or automatic; it doesn't really matter too much. These days they've commonly been grouped into a thing called outside-in testing, which is a grouping term for both of those types of testing. And generally, there's been a movement in recent years towards writing executable specifications.
So, expressing these requirements and the scope of a project in a format that the business can understand, in high-level, spoken language, but that can also be handed to the programmers: okay, go and write this, implement this. It's really business oriented, because it's about the business understanding what it's actually paying for. So, enough of this. Boring. Let's move on to the interesting technical stuff. This is about behaviour-driven infrastructure, right? So, what is behaviour-driven infrastructure? Well, it's really based upon this idea of infrastructure as code. Andrew Shafer, one of the founders of Puppet Labs, had this really interesting post a few years ago, and this is a fantastic quote from him. What he's basically saying is that we're looking at infrastructure through a bit of an abstraction. We're not looking at it as individual components that are each performing something; we're looking at it as a single entity that you interact with. If we take that analogy, the infrastructure is the application, right? So, let's think about what all the different sub-components of that infrastructure are. The daemons are essentially the libraries that are performing particular functions. And configuration management is the programming language, the glue that's pulling all these different bits together to build the application. Therefore, the infrastructure is pretty much built with code, right? If we take that to the next step: code without tests is pretty bad. In fact, I would go so far as to say that code without tests is evil. And why is that evil? Well, really, it's because you can't easily verify that the system does what you expect it to do. It comes back to those business requirements I was talking about before.
So, behaviour-driven infrastructure, then, is really about taking these behaviour-driven development principles and tools, to a certain extent, and adapting them to infrastructure development. We're basically, I suppose, cargo-culting a lot of the tools and processes from the development world and applying them to what we're doing operationally. So, what are the tools for being able to do this? The obvious one, the one most familiar to me, is Cucumber. Cucumber is an executable format for writing software specifications, and it's also a tool for executing those specifications, using the same high-level spoken language we were talking about before. The tool takes that, interprets it, and executes code behind the scenes. Some terminology, so we can understand how Cucumber fits into this. You've got a feature, and a feature is a module of common functionality. In general, you have one feature per file. And this is what a feature looks like. Can you guys up the back see that okay? The contrast is a bit flaky. You can't? All right, my apologies. Better? Okay. This is a really simple outline of a feature. We've got the name of the feature, and then we've got this preamble. The preamble is not actually executed at all; it's just a high-level definition of what this feature is trying to achieve. A feature itself has many scenarios, and a scenario is a way that a user actually interacts with that feature. So, here we're saying that Google has a home page, right? The home page is essentially a feature of Google. We're saying that when I go to google.com.au and I search for great balls of fire, then I should see great balls of fire in the search output. And a scenario itself has many different steps. So, going back to what we're seeing here, we've got these whens and thens.
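The slide itself isn't captured in the transcript, but a feature along the lines being described would look something like this. The preamble wording and exact step phrasing here are reconstructions, not the actual slide:

```gherkin
Feature: google.com.au
  To find things on the internet
  As an internet user
  I want to search Google

  Scenario: Searching for a phrase
    When I visit "http://www.google.com.au"
    And I search for "great balls of fire"
    Then I should see "great balls of fire" in the search results
```

The three lines under the Feature name are the unexecuted preamble; the Scenario block is what actually runs.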
So, the step keywords that Cucumber comes bundled with are Given, When and Then, and there's also an And keyword. A Given is about setting up a set of conditions for the test to be able to execute. A When is performing an interaction with the system. And the Then is checking the output of that, checking that it matches whatever your expectations are. The And is just a tiny bit of syntactic sugar within the Cucumber file that executes the same keyword as the line above. So, if I've got a When on one line and an And on the next, Cucumber will just treat that And as a When. In this way, we can actually think of steps themselves as being like a series of unit tests. And these steps map to blocks of code. This is where the rubber hits the road, pretty much. We've got these scenarios, and each of those steps maps to a block of code. Cucumber internally uses regex matching, which means that you can reuse these steps across multiple scenarios and multiple features. Generally, when you run your scenario, Cucumber will go: oh, okay, you don't have steps for this. So it prints out a bunch of code that you can just copy and paste into a file, with a pending keyword in there, and then you go and fill them out. Behind the scenes here, we're using another little library called Webrat, and Webrat handles some basic HTTP interactions. So, we're saying here that we're visiting a particular location. The location is being extracted by this regex group here, captured and passed in as this variable within the block, and then we're using it down here. And the same for these other two grouping matches. The last step here is using another thing called RSpec, which is doing these should matches.
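That matching mechanism can be sketched in a few lines of plain Ruby. This is a toy illustration, not Cucumber's actual internals: steps are registered as regex/block pairs, and captured groups become the block's arguments.

```ruby
# Toy step registry: plain-text steps map to blocks of code via regexes.
# All names here are invented; Cucumber's real implementation differs.
STEPS = []

def register_step(pattern, &block)
  STEPS << [pattern, block]
end

def run_step(text)
  STEPS.each do |pattern, block|
    if (match = pattern.match(text))
      # Captured groups are passed into the block as arguments.
      return block.call(*match.captures)
    end
  end
  raise "Undefined step: #{text}"
end

register_step(/^I go to (.+)$/) do |url|
  "visiting #{url}"            # a real step would drive Webrat here
end

register_step(/^I search for "(.+)"$/) do |query|
  "searching for #{query}"
end

run_step('I go to google.com.au')
run_step('I search for "great balls of fire"')
```

Because matching is by regex, the same two steps are reusable from any scenario in any feature file that phrases its lines the same way.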
So, instead of the assertions we were looking at before, RSpec is basically syntactic sugar around assertions. So, the blocks of code themselves are like unit tests, and therefore a scenario is a serial execution of those unit tests. That's essentially what an integration test is, right? You're interacting with different bits of the system one point at a time, and then you're stringing the results of all that together to check that if I do A, and then I do B, and then I do C, then I see D. So, how do we go about using this? Well, installing is pretty simple and straightforward. Cucumber is written in Ruby, and it's distributed as a RubyGem. So you go apt-get install rubygems if you're on Debian, or yum install, or whatever the package manager of your choice is. Then you just go gem install cucumber, and that'll install it onto your local system. Using it is very, very simple. There's a standard structure that Cucumber expects to find things in. Generally, you contain all your Cucumber features in a directory called features, and then you have your steps in a directory called steps. In this particular example here, I'm doing it very bare bones, doing all the manual work myself; I'll get to another little project that accelerates a lot of that development in just a moment. So, the testing workflow then becomes pretty much identical to what we were talking about before: we write a test, run that test, the test fails, we make the test pass, we refactor, right. So, for the feature and scenario that I was talking about before, we can use exactly that. We edit this file, features/site.feature, and then we run that, and then we'd have some steps as well. This is a little bit abstract, so let's go through and actually play around with it. All right, I'll invert that.
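As a concrete sketch, the bare-bones layout just described can be created like this. The project name and feature wording are invented, and the features/steps layout follows the talk's description (newer Cucumber conventionally uses features/step_definitions instead):

```shell
# Create the directory structure Cucumber expects, as described above.
mkdir -p myproject/features/steps

# A bare-bones feature file to start the write/run/fail cycle with.
cat > myproject/features/site.feature <<'EOF'
Feature: Site availability
  Scenario: The home page is up
    When I visit the home page
    Then the request should succeed
EOF

ls myproject/features
```

Running `cucumber` against that feature would then report the two steps as undefined and print snippet code to paste into the steps directory.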
Can you all still read that? Cool. All right, so, back in the example there I was driving it all manually and creating the structure. I've made this other little project called cucumber-nagios, and it basically provides Cucumber output that you can plug straight into Nagios; for the example here, it's also really great for just getting up and running with Cucumber easily. So, you just go gem install cucumber-nagios, which I've already done. And now, if I go cucumber-nagios-gen project, and I'm just going to call it lca2011... Okay, great. So, if we go into lca2011, you can see that it's already initialised a git repository for us, and it's got the features and steps directories that I was talking about before, and it's got a couple of generators in here as well, and the cucumber-nagios command itself, which wraps Cucumber to perform the Nagios output. Once we're in here, we can run cucumber-nagios-gen feature. Oops, fat fingers today. And we're going to call it website. Oh, sorry, that's wrong. I'm going to go lca2011.linux.org.au, and then website. So, if I go into features, website.feature, it's generated all the scaffolding code for us here. Really simple: the site should just be up. When I visit lca2011.linux.org.au, then the request should succeed. Okay, so now if I run cucumber against that feature... Okay, so Cucumber here is going: well, I don't know what these steps are that you're asking me to do, so please give me some code to do that. There's actually an option for that: we just go --require features, and then... I get a wonderful breakage. Oh, right, awesome, sorry, I missed a step there. I was ignoring my own install instructions. You go bundle install to pull in a bunch of these dependencies. So, behind the scenes, actually, if I...
cucumber-nagios has a whole bunch of dependencies that are required to make stuff work: things like RSpec, the testing framework; Webrat, for doing HTTP interactions; Net::SSH; AMQP; that sort of thing. So, if I just run that, that's going to take a minute. Why don't we... Actually, let's just look at cucumber-nagios for a second while we're waiting. cucumber-nagios is a very simple wrapper around Cucumber itself that outputs the results in the Nagios plugin format. There are a bunch of options that you can pass to it, and we're basically saying here that we want to use the Nagios formatter for Cucumber. You can write your own custom formatter, and there's a bunch of stuff that comes bundled with Cucumber as well: there's the standard pretty formatter, which gives you that nice output we were seeing before back in the slides, there's a JUnit formatter, a whole bunch of different things. Almost done. Is there anything else in here? Oh, yeah, okay. The other thing that's probably worth looking at is env.rb. Cucumber, internally, will run this before it tries to execute any of the features that you pass it. Basically, you can require your own libraries and load in your own code and that sort of thing up here. It's sort of like a global require that gets executed. Okay, so let me try that again. This is fantastic, considering it worked a second ago. Sigh. This is awesome. Sorry about this. This was working perfectly about five minutes ago and now it's exploded in my face. Okay, let me try... Okay, I think I know how to fix this. This is a perfect opportunity to rant about how absolutely crap Bundler is, but I'm not going to. All right, we'll come back to that in a minute, because that's going to take another five minutes or so. Okay, so now that we're able to write these high-level tests for the way that you interact with your infrastructure, it would be...
It's sort of interesting to talk about what the implications of those are when we're treating the infrastructure as code itself. So... I might just take this out of that. There we go. One of the interesting things, coming back to cargo-culting what the development community is doing, is that we can use a whole bunch of their existing tools, like continuous integration, for testing the stuff that we're building. The prime example of that is testing the server build. You have a whole bunch of high-level tests for the way that you expect a particular server, or a series of servers, to operate, and you run those tests every time you do a check-in to your configuration management system. So you could use something like Hudson or Jenkins or whatever to create a whole bunch of clean-room VMs, apply your Puppet manifests or whatever, and run those tests against them. That's an interesting way of spotting regressions, and of testing that the behaviour and the environment are still the same when you're adding new features. But the question then becomes: what environments are we running these tests against? Do we run these against UAT, or staging, or even production? Or are we doing clean-room environments every time? One of the interesting implications there is that perhaps, when you're running the tests in clean-room VMs, maybe at the same time you want to have a whole bunch of older VMs that have had the configuration previously run against them, and verify that if you run that same configuration against them, they behave in exactly the same way. And what about destructive tests? In the development world, the common way that you deal with this is with fixtures.
So fixtures are essentially test data that you feed into the system before you run any of these tests, to set up a bunch of state, so that you have users to log in with, and there might be orders in the system, or something like that. And that's using a very simple set-up and tear-down pattern. But how do you apply that to an infrastructure setting, where you don't want to set up and tear down data on a live production system, right? That has all sorts of really nasty implications. One of the ways that maybe you could do that is with A/B testing, where you segregate a part of your infrastructure that you only run these tests against, and you have a bunch of fail-safes behind the scenes that are able to catch the test data that's being written. But then again, it's quite an interesting problem, because now that you've implemented all this extra stuff on top, you're not actually testing what's really happening in production. So I don't have any clean-cut answers to that; I'd be really interested to have a discussion about it. One of the other interesting implications is the migration to a configuration management system from doing all your stuff by hand, from rolling your servers by hand. Imagine that you've got an old legacy environment that was set up a while ago by some person who's since left the company, and there's no real documentation, but it's business critical. I'm sure you've all seen hundreds of servers like this in your time in the corporate world. So how do you take that? You obviously want to use configuration management, to reduce the risk of that server basically exploding and the business being without that critical system.
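The set-up and tear-down pattern being referred to can be sketched in plain Ruby. The in-memory "database" and the seeded records are invented stand-ins for real test infrastructure:

```ruby
# Sketch of the fixture pattern: seed known state before a test, remove
# it again afterwards. The in-memory hash stands in for a real database.
DB = { users: [], orders: [] }

def setup_fixtures
  DB[:users] << { name: "testuser" }   # known state the test can rely on
end

def teardown_fixtures
  # Leave no test data behind: exactly the guarantee that's hard to make
  # when the "database" is a live production system.
  DB[:users].clear
  DB[:orders].clear
end

setup_fixtures
DB[:orders] << { user: "testuser", item: "widget" }   # the "test" body
raise "order not recorded" unless DB[:orders].size == 1
teardown_fixtures
```

On a test database this cycle is harmless; run against production, the same tear-down step is precisely the destructive operation you can't afford.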
So behaviour-driven infrastructure, and using behaviour-driven development principles, is quite interesting here, because what you can do is write a whole bunch of tests that model and test the behaviour of the existing system, make sure that they all pass, all that sort of thing, and then take exactly the same tests and run them in the new environment, right, the one that you're managing with configuration management. So again, that's where the implementation doesn't actually matter all that much. You just care about the behaviour; you're just testing the behaviour behind the scenes. And back on the continuous integration front, there's an interesting crossover here with what we in the ops world have in monitoring systems. If you look at what a continuous integration system is actually doing, it's very simple. It's this life cycle, in fact more of a pipeline: building a piece of software, taking that built artifact and deploying it onto a system somewhere, maybe setting up a bit of test data, then running a series of tests against that, and then notifying somebody, or maybe not notifying anybody, when those tests either pass or fail. The interesting thing here is that a monitoring system is doing basically exactly the same thing, except we don't care about building or deploying the software. Right, so we're sort of already doing this in some ways, but we just collapse those four steps down into two, because the other stuff has already been done for us. And that leads to an interesting question about the way monitoring is currently being done, which, in my opinion, is that we've been asking a lot of the wrong questions. If you look at the standard sort of monitoring checks that you've got, they're pinging a server, or they're doing a TCP connect and seeing that you get a response.
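The pipeline collapse described above can be put almost literally: a CI run is build, deploy, test, notify, and a monitoring system keeps only the last two stages. A trivial sketch:

```ruby
# A CI pipeline versus a monitoring system, stated literally: monitoring
# keeps the test and notify stages and drops build and deploy, because
# the software is already built and deployed.
ci_pipeline = [:build, :deploy, :test, :notify]
monitoring  = ci_pipeline - [:build, :deploy]

monitoring  # the two stages a monitoring system actually performs
```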
You might do some sort of HTTP interaction, but even then, you hit a page and you check that there's a string available on that page. And those sorts of checks are basically asking two types of questions: is this host up, and is the service available? The problem there is that this doesn't take into account all these different corner cases. Like, what happens if the network stack of the machine is up but the rest of the system is broken? The checks are still passing; that test doesn't capture that particular failure mode. And what about service availability? Well, that doesn't really catch the case where someone's misconfigured a service but you still get that page. You might be checking for, say, the copyright notice at the bottom of the page, but if you've got a whole bunch of database and PHP errors above it, then that check is worthless. And what about bugs triggered by user data? A lot of the testing that's done in these systems is done with clean-room data: stuff that developers think is the data that's going to be created at some point. But it doesn't take into consideration things like people trying out cross-site scripting attacks and that sort of thing. And that obviously leads to the inevitable: what happens if the system has been hacked? You might not really know about that. So, I'm guessing most of the operations people in the room here are going to go: well, why should I care about this, really? Because the examples that you gave here are very, very simple; quite often I'm writing my own tests in my environment that are doing much more complex things, and my Nagios checks already do that.
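The copyright-notice failure mode is easy to demonstrate. Here's a sketch of a naive substring check, in the style of a string match in an HTTP check plugin; the page bodies are made up:

```ruby
# Sketch: a naive substring check of the kind a simple HTTP monitoring
# check performs. The page bodies below are invented for illustration.
def page_ok?(body)
  body.include?("Copyright 2011")   # "is the copyright notice there?"
end

healthy = "<h1>Shop</h1>\nCopyright 2011"
broken  = "Fatal error: could not connect to database\nCopyright 2011"

page_ok?(healthy)   # passes, as it should
page_ok?(broken)    # also passes, even though the page is full of errors
```

Both pages pass the check, which is exactly why "the service responds with the expected string" is a much weaker statement than "a user can complete a task".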
In fact, I've heard anecdotally that Google is doing very similar sorts of things internally, with a whole bunch of smaller checks that aren't themselves doing any sort of notification, but with another master check on top that's taking an aggregate of all of those. So, the thing here is that Cucumber really provides a framework for phrasing these questions. Those keywords that I was talking about, the Given, the When, the Then, right, it's all about modelling these interactions. And it really lowers the barrier to entry for writing these good checks, or better checks, I suppose. The only caveat here is that you need to have a firm grasp of the language, right? If you don't have reasonable communication skills, it's very easy to be ambiguous when you're writing these Cucumber scenarios. This is a fantastic example here: perfectly valid Cucumber. Those blocks of code there are going to map to something, so your system could be working, right? But if somebody looks at that, they can't conclude what it's doing. Not only is it completely valid, it's also completely useless. So, you've got to be really careful about how you phrase a lot of this stuff. I've been writing Cucumber for going on three years now, and ambiguity is still a trap that I sometimes fall into. The interesting thing here, though, is that Cucumber is essentially providing a common specification, in a format that both developers and operators can share. You can have a lot more cross-team communication between the people who are writing the code and the people who are managing the code in production. But more importantly, it's actually something that the IT department as a whole and the business itself can share, so the business is a lot more informed about what we're actually doing and what we're actually testing.
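The slide isn't captured in the transcript, but the classic form of the trap looks something like this: syntactically valid Cucumber that maps to real code, yet tells a reader nothing about what is actually being verified.

```gherkin
Scenario: It works
  Given a thing
  When I do stuff
  Then it should work
```

Every step here can match a step definition and pass, which is exactly the point: valid, green, and useless as a specification.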
And the cool thing here is that it actually removes a whole bunch of duplication of tests. In the development environment you've got a whole bunch of automated tests, but then when the software gets packaged up and thrown over the wall to the operations team, they go: well, what the hell am I monitoring? They'll go off and write their own checks, and maybe they're checking for the same thing, or maybe they aren't. There's a lack of information transfer between the two teams, when you've got all these tests being written in development and these other tests also being written in operations. Oh, I hate these projectors. There we go. You can really think of Cucumber features as being a generalised form of tests, right? You're running the same specification: the way the system should work, the way someone interacts with the system, is described in exactly the same way. But the implementation behind the scenes can be quite significantly different. That lets you take into account things like operational constraints. You don't want to pound the system too hard when you're running these tests, especially intensive tests. Say you're running a video uploading service: you might want to test what happens when I upload this particular video, but if you're uploading hundreds of megs of data in a request, then that's obviously going to have an operational impact on the way the infrastructure is performing, right? There's an interesting thing in Cucumber here, which I'll go back to; in fact, I might go back to right now. This is a bit of a shambles, isn't it? Do I have any... Hey, I do. Okay. linux.org.au and home. That's presenting for you. If we go and have a look at this here, I'll try and illustrate.
So there's a concept of tagging on a scenario, where you can basically go: okay, I'm going to put this tag of, say, production on it, and then I might have another five tests that are all doing the same sort of thing, maybe, but this one you only want to run in staging, for example, and this one you won't run at all. So when you run Cucumber against these, there's a --tags option. You can go --tags, sorry, @production, and it will only run the scenarios that are tagged with @production. So this is an interesting way that you can share a lot of the specification. The thing is that the development team is very, very aware of what is operationally important to the business, right? Because they're there, they're writing the software; they know what the business cares about. And if you really get down to it, in most businesses there are probably only two or three core functions that you want to be working all the time so you can make money. So what the development team can do is take those scenarios that test that vitally important, mission-critical function of the business, tag them with @production, and then hand them over to the operations team and say: okay, well, these are the tests that I need you to actually implement and run in production. So that's an interesting way of increasing the communication and the collaboration between the two teams. So I've explained where we were coming from and where we're going to, but where do we really... What's the next step from here?
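A sketch of how those tags might look in a feature file; the scenarios themselves are invented:

```gherkin
@production
Scenario: Customers can check out
  When I add a product to my cart
  And I check out
  Then I should see an order confirmation

@staging
Scenario: Nightly import completes
  When I run the nightly product import
  Then I should see the imported products
```

Running `cucumber --tags @production features/` then executes only the first scenario, so the operations team can run just the mission-critical subset against production.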
Well, the interesting thing about infrastructure as code and behaviour-driven infrastructure is that it means that everyone's going to start writing a bit more code, operations in particular. We end up writing more code to automate more things, because of things like the cloud and whatnot, where we just have a lot more systems, and it's a lot easier to manage them programmatically than to log into a thousand servers individually and run some particular task. And if we're writing more code, and we're going to treat this as a disciplined software engineering practice, then we're also going to need to write more tests and that sort of thing. So I don't think, in the future, it's really going to be okay for operations folk not to understand how to write even just a small amount of code. And the interesting thing about writing code is that eventually patterns are going to fall out of the stuff that we're doing: common architectures for deploying applications and building infrastructure. A really obvious example of that is the three-tiered web architecture that was pioneered in the late 90s and very much popularised by the likes of Facebook and Twitter, where you've got a bunch of application servers sitting up the top, some caching servers, and then a database server underneath. That's a very common pattern; that's how most high-availability websites are being built these days. So that's an interesting infrastructure pattern. Steve Jin from VMware has also been writing an interesting series of blog posts over the last three or so months, describing common patterns for provisioning virtual machines and using virtualised infrastructure. So you're starting to see these patterns emerging from the way that we're doing stuff operationally.
And we're very much at the beginning of a renaissance, I suppose: the operations community and the infrastructure-as-code idea are back where software development was in the early 90s, really. So interesting patterns are definitely going to start emerging, and that's going to be really, really exciting. The obvious thing here is expanding the library of these common tests and common interactions, so that we have a de facto standard we can point to, to test these different types of systems, and reuse them. One example from the Cucumber side: if we look at the feature steps, cucumber-nagios itself comes bundled with a whole bunch of steps for these common interactions. A really simple one is the command steps. These were a nice contribution from Opscode, the guys behind Chef: really simple things like, when I run a particular command, then something should exist, or I should see some particular output on the command line. These are really common interactions that you're doing all the time. So cucumber-nagios is sort of becoming a library of these common operations steps, but it would be really interesting to see more contributions and more of these systems modelled. The HTTP steps are another really simple example, if my keyboard decides to work. Yeah, that's actually a really good idea. By the end of the next session, John, if you really want it to work. There's some interesting stuff that came out of the Cucumber community at the end of last year: it's basically a web app for browsing these different scenarios, sorry, these different steps. You can just browse through it, find the step that you want, click through, get the snippet of code, and copy-paste it into your application. So yeah, having a forge of these common infrastructure interactions would be really cool.
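To make the command steps concrete, here is a minimal plain-Ruby sketch of what a cucumber-nagios-style command step does under the hood: run a shell command, capture its output and exit status, then assert on them. The function names here are illustrative; in Cucumber these calls would live inside When/Then step definitions rather than standing alone.

```ruby
# Sketch of "When I run ..." / "Then I should see ..." command steps.
require 'open3'

# Run a shell command, capturing stdout and the exit code.
def run_command(cmd)
  stdout, _stderr, status = Open3.capture3(cmd)
  { output: stdout, exit_code: status.exitstatus }
end

# A "Then I should see ..." style check against the captured output.
def should_see(result, expected)
  result[:output].include?(expected)
end

result = run_command('echo hello world')
puts result[:exit_code]           # the command's exit status
puts should_see(result, 'hello')  # did the expected text appear?
```

In real step definitions the regexp-matched step text would supply the command and the expected output, but the mechanics are just this: shell out, capture, assert.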
And the obvious thing is just explaining to people, once you're doing it and getting a good feel for it, the importance of doing this, because there isn't really a culture of software craftsmanship within the operations community. We're very insular, we're very focused on "this is the way it's always been and the way it's always going to be, and we don't want any of that crazy behaviour-driven development stuff". Unfortunately, I would love to live in a world where we didn't have to change, but change is inevitable, so as we move more towards infrastructure as code, we need to be able to talk to people about the importance of these concepts and about applying software engineering principles to the development of infrastructure. So I'd like to open up a bit of a discussion here if people want to ask questions, and I can bang away at getting cucumber-nagios working behind the scenes at the same time. Yes? [Audience question, partly inaudible: I'm just trying to understand the difference, at least in terms of monitoring, between what we're already doing and what you're proposing.] So yeah, that's actually a really, really good question: how does behaviour-driven infrastructure testing fit into traditional monitoring environments? (You got it working?) So I see the sort of tests you're talking about, which are basically statistic collection and then checking particular values, as quite separate from this behaviour-driven infrastructure stuff. I see them working in conjunction with one another. The low-level tests you're talking about are more for, I suppose, triggering warnings about stuff that might happen or is about to happen. Your website could be passing all of these behaviour-driven tests, and it's all working, but it could still be running slow as a dog. So a lot of...
[Audience: There'd be a response time; the check measures response time.] I see the behaviour-driven tests as a layer on top of the lower-level tests you're talking about there: very low-level statistic collection, statistical analysis, that sort of thing. Yes, that's exactly right; I probably wasn't clear enough there. So the comment was: I'm adding to the tests, not replacing them. These low-level statistic-collection and value-comparison sorts of things are absolutely valid, very, very important. This is absolutely not a replacement for them. It's changing the perspective of how we actually develop these systems, and, I suppose, providing a more business-friendly way of testing these interactions, testing that the business is actually receiving the value it's paying for. Yes? [Audience: The problem with any sort of test code, obviously, is going to be the coverage of the tests, and, when a problem does occur, digging down and finding the underlying cause. Now, given that the configuration is pretty much a specification of the entire service, like if you scan through the web server config, and then the logs give you the behaviour, what scope do you see for being able to automate a lot of the test creation by analysing the config and the behaviour from the logs?] Yeah, actually that's a really good question. That's something I haven't quite gotten to thinking about, but my initial reaction is: right now these Cucumber steps aren't very well parameterised, and when you do parameterise them, you've actually got to write some stuff into the scenarios themselves. One of the things about being Nagios-compatible is that you've got to be able to provide a bunch of arbitrary arguments on the command line and inject them into these scenarios. So right now, technically, cucumber-nagios isn't 100% Nagios-compatible in that way.
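To make that Nagios-compatibility point concrete: a Nagios plugin communicates its result through a standard exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN), so however the scenarios run internally, the check ultimately has to collapse a test run into one of those four statuses. A minimal sketch, assuming a simple policy where any failing scenario makes the whole check critical (the function name and the policy are illustrative, not cucumber-nagios's actual implementation):

```ruby
# Standard Nagios plugin exit codes.
NAGIOS = { ok: 0, warning: 1, critical: 2, unknown: 3 }.freeze

# Collapse a scenario run into a single Nagios status: any failing
# scenario makes the check critical; running no scenarios is unknown.
def nagios_status(failed, total)
  return NAGIOS[:unknown] if total.zero?
  failed.zero? ? NAGIOS[:ok] : NAGIOS[:critical]
end

puts nagios_status(0, 5)  # 0 (OK)
puts nagios_status(2, 5)  # 2 (CRITICAL)
```

The parameterisation problem from the question then becomes visible: whatever arguments arrive on the command line have to be threaded through to the scenarios before a status like this can be computed.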
I suppose what you're talking about there is: when I automate some infrastructure to deploy a web server or an application server or whatever it is, how do you make sure that those tests match up with that? So you're talking about constructing the tests on the fly, based on how the individual components interact with one another? (We can't hear you on the AV system. We'll finish this later.) Yes? [Audience: You talked at the start about using the description almost as a software specification. And I see that as coming from the bottom up, from testing low-level unit things right up to trying to describe behaviour, hence "behavioural". For that to truly take hold, what extra work are you doing to enable someone who is going to define a software specification to test your infrastructure, and make sure that what the business is paying for, or thinks it's paying for, is what it's actually getting?] That's a good question. [Second audience member: Actually, can I just add something to this question really quick? (Sure.) From what you've described, this sounds like a ton of extra work for a ton of extra people. The developers now have to write not only the tests for their own code, but also for how their code is going to be deployed. The sysadmins are going to have to take those Cucumber features and write the code to fill in the regexps, and on top of that they're already maintaining their monitoring systems, and they're already maintaining their Puppet systems or their Chef systems or whatever. We know perfectly well that the business people are not going to be writing any of those Cucumber features. So now we have this additional layer where all these people are going to be doing all this extra work. So maybe you can talk about why we would go through all that extra work, and what we're getting for it?]
I don't see it as extra work per se, and I have to disagree with what you were saying there about the business people not writing these Cucumber scenarios. That's exactly where these specifications come from: by talking to the business and finding out exactly what their requirements are, what the specification is. I disagree. In fact, it's like a standard BA cycle when you're building a system, right? You need to talk to the business people, and quite often Cucumber is an easy enough tool for them to be able to understand. So I don't see it as being a separate thing. I see the business people helping the software developers write the specifications, and the specification is executable. They're not writing extra, different tests; it just slightly changes the way they're doing it. And then they can take exactly the same specification and hand it over to the operations team, and half of the work is already done for you, by writing the features, by writing the specification, in the first place. So I don't actually see it as that much extra work; I suppose we just have slightly different world views on this. I'm happy to talk about it afterwards. [Session chair: Thank you very much, Lindsay. On behalf of the Linux conference here, please accept this small gift: it's a macadamia nut bowl made from crushed shells, the same as the one that Vinceref got the other day. Thank you very much.]