Thank you very much. I'm Yury; I work as QA architect for a company called GoodData, and today I would like to share our experience of building test-driven infrastructure.

The overall agenda: first we will try to define the goal we are going to achieve; we will shortly cover the Test Kitchen project, which is the central part of the story; we will dive into the implementation at GoodData, how we bind Test Kitchen with Docker and Serverspec; I will try to demo an actual test-driven infrastructure change; and we will wrap up with why it makes sense to us.

The goal: at GoodData we strongly believe that infrastructure code should be treated like any other development code, meaning we try to apply development best practices to it. For us that basically means applying a test-driven development approach to our Puppet code, and as a by-product we get a naturally growing regression test suite.

Test Kitchen is an open source project written in Ruby that originated in the Chef community. It is a kind of test orchestrator. If you attended the previous talk, which was partially about Vagrant, you can think of Test Kitchen as a test-oriented Vagrant. It is very extensible and flexible; basically, you can build completely different test infrastructures using the same tooling. It has a fancy slogan (that is how they managed to sell it to me): "Your infrastructure deserves tests too." It is also the reason it is featured in a pretty nice book called Test-Driven Infrastructure with Chef.

So, a high-level overview of how Test Kitchen works. It is really simple.
It creates a VM or container, runs configuration management code there, and then runs some test suite to verify the correctness of the configuration management code under test. As I tried to reflect in this diagram, Test Kitchen has three main modules, each responsible for a different thing: the driver creates the instance under test, the provisioner applies the configuration management code (in Kitchen terms this verb is "converge"), and the verifier actually runs the tests. All of this happens in an isolated instance under test.

You configure Kitchen in a simple YAML file, and these are the main sections of that configuration. For the driver there are plenty of plugins: anything from Amazon to an OpenStack private cloud and down to Docker. For the provisioner it is the same approach: Chef is supported out of the box, but there are external modules supporting Puppet, Ansible, SaltStack and others, and new ones get merged from time to time. The verifier supports out of the box the popular shell testing frameworks as well as RSpec, one of the most popular frameworks in the Ruby community, and its extension Serverspec for infrastructure testing.

Less important modules: the transport is the way you actually deliver the testing code into the instance, and platform and suite are a way to semantically organize your testing scenarios; you can override settings and define testing constraints in these sections.

At GoodData we made the choice to use these specific plugins: the Docker driver, the Puppet provisioner, the SFTP transport and the Serverspec verifier. Just to give you an idea of how our automated infrastructure test pipeline works at GoodData:
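For reference, these plugin choices correspond to a .kitchen.yml skeleton roughly like the following (a sketch; the platform and suite names are illustrative, not our actual configuration):

```yaml
driver:
  name: docker          # kitchen-docker plugin

provisioner:
  name: puppet_apply    # kitchen-puppet plugin

transport:
  name: sftp            # kitchen-sync plugin

verifier:
  name: shell           # we invoke our own Serverspec executor from here

platforms:
  - name: centos-7

suites:
  - name: default
```

Each section can be overridden per platform or per suite, which is what makes the same skeleton reusable across very different test scenarios.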
First of all, when a developer creates a pull request, it goes through a relatively fast test of Puppet catalog compilation, meaning catalogs are compiled for every type (server role) we have described in our infra. Assuming everything is fine, the code under test is promoted to the next level, which is Test Kitchen. Test Kitchen operates by provisioning Docker containers: it puts the Puppet code under test into a Docker container and runs the test suite there. We also use a trick we internally call shell mocking; it is a way to isolate things in the container and get rid of external dependencies, and I will elaborate on it a bit later. Given that these stages are green, the code can be merged to the relevant branch and it goes to the actual environments. An important point: in line with continuous delivery best practices, exactly the same code that got tested is what gets delivered into production. We also ship our tests together with our code, so instances get tested in the real environment as well, and obviously there we have no mocking of any kind.

The Docker driver is an external module which supports provisioning Docker containers as the instances under test. I will show configuration chunks here and there to give you a feeling of how easy it actually is to configure. In this example we point the Docker driver at our private Docker registry, define the platform for compatibility (just to give the driver a hint about which package manager to use, and so on), and add some provision commands which prepare the instance to a clean state; here we just clean the caches of our package manager.

The next step is converging. The Puppet provisioner, also an external module, gets Puppet into the instance under test, and the most important part from a test perspective is the facility to override Puppet facts to create interesting test constraints.
Here is an example of our Puppet provisioner configuration. We basically define where Kitchen should take the Puppet code from, and we install custom facts. That means we override some facts that are actually used in our code; meanwhile we fake out some external things, for example this FreeIPA one-time password, we just fake it out. Overriding at the provisioner level means these facts apply to every instance under test; we also have the ability to override at the per-instance level later.

The transport: the original implementation is really underperforming. Luckily there is already a project available called kitchen-sync, so we replaced the default transport with its SFTP one and dramatically reduced the upload time. The reason is not that SCP is generally worse than SFTP; it is just that the Ruby implementation in the standard net-ssh library is somehow horribly underperforming for SCP. kitchen-sync also provides rsync support, but that is more complex and requires a separate SSH agent, so we made the choice of the less complex solution. Here we just override the transport, and that is it.

The RSpec suite: Serverspec basically deserves a separate talk. It is a very rich testing framework, an extension of RSpec. (Sorry, I need to get some water.) It is an extension of RSpec and provides a rich abstraction for testing your infrastructure. It is a project absolutely independent of Kitchen; you can use it standalone. The way we do it, we keep the tests together with the actual infra code, and this way developers can create semantically consistent PRs.
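Pulling the driver, provisioner and transport pieces together, the corresponding .kitchen.yml chunks might look roughly like this (the registry URL, paths and fact names are illustrative, not our real values):

```yaml
driver:
  name: docker
  image: registry.example.com/base/centos:7   # private registry (made-up URL)
  platform: rhel                              # hint: use yum/rpm conventions
  provision_command:
    - yum clean all                           # start from a clean package cache

provisioner:
  name: puppet_apply
  manifests_path: puppet/manifests
  modules_path: puppet/modules
  custom_facts:
    ipa_otp: fake-otp                         # fake the FreeIPA one-time password

transport:
  name: sftp                                  # kitchen-sync's SFTP implementation
```

Facts set under custom_facts apply to every instance; per-suite or per-platform overrides can narrow them down for a specific test constraint.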
So the tests live together with the production code. Here are several examples of Serverspec to give you a feeling of how it looks; these are just chunks randomly picked from the official documentation. You can see that you can test really simple stuff, like a file existing and being a directory, up to more abstract things like the configuration of a Linux container, simple network stuff, kernel capabilities. I would like to highlight the command resource type, which is actually the most flexible one, because you can run any kind of command and test its stdout, its exit status, its stderr and so on; this way you can build really customized checks. At some point you would probably need to create additional custom Serverspec resource types in Ruby, but you have a lot of room before you really need that.

How to actually bind Serverspec to Kitchen: the out-of-the-box support from the Test Kitchen project is called Busser, a runner that supports multiple testing frameworks, but it is very simple, it does not scale, and it is not very extensible. So we went a more advanced way. At the link here, one great guy wrote a blog post with a good reference implementation of a Serverspec runner; we borrowed code from him and developed our own test suite executor on top of that. Our implementation has matured a lot and we will soon open source it, but that did not happen yet.

So how do we actually execute Serverspec now? Given this custom test suite executor, we integrate its invocation the same way as on production instances, for example in the same way in a Docker test container. We just reuse the shell verifier: we tell it to run a shell command which does some setup and then runs Serverspec, passing some control variables. Our implementation can produce results in JUnit format.
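A few Serverspec checks of the kind shown on the slides might look like this (a sketch based on the official documentation; it needs the serverspec gem and the usual spec_helper to run):

```ruby
require 'spec_helper'

# Simple file check: it exists and is a directory
describe file('/etc/httpd') do
  it { should be_directory }
end

# Simple network check: the service is listening
describe port(80) do
  it { should be_listening }
end

# The command resource type: run anything and assert on its output
describe command('getenforce') do
  its(:stdout)      { should match /Enforcing/ }
  its(:exit_status) { should eq 0 }
end
```

The command resource is the escape hatch: anything you can express as a shell one-liner becomes a test on stdout, stderr or exit status.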
Since the results are in JUnit format, we can easily plug them into Jenkins. There is also tag-based skipping in Kitchen: anyone familiar with RSpec knows its standard facility to skip tests that are explicitly marked with some tag. The use case is that some of our Serverspec tests are really end-to-end; they only work in more or less real environments and make no sense in Kitchen, while they make perfect sense in an actual environment. So we keep the same test suite everywhere, while those tests do not produce misleading results in Kitchen.

Shell mocking: while developing this testing infrastructure, we faced the problem of external dependencies and some Docker-specific issues. The way to solve it turned out to be really, really simple: we just mock the executables that do not work during converge and that are not important for the testing scenario. It is a simple wrapper script; it plugs in during the test instead of the package manager. Assuming a mock is defined, the call is replaced with the defined mock; given there is no defined mock, the call is just passed through to the real package manager and everything works as usual. The format is really simple: the package name passed to the actual binary (or any kind of executable), plus the mock contents.

To give you a clearer picture, here is a real example we are using. For the package ipa-client, ipa-client-install runs in the test scenario, and we definitely do not want to register the container in FreeIPA, so we simply mock it out; the resulting mock contains a very simple bash script that just writes out the configuration files. And the simplest way to do it is as we do here: just define the package and its binary, and our tooling will populate the whole mock with a fake that simply returns exit status zero, success. That is fine for Puppet; the mock could even be just /bin/true.
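A minimal sketch of such a wrapper might look like this (the directory layout and names are illustrative, not GoodData's actual implementation): a mock for "binary + first argument" lives at $MOCK_DIR/&lt;binary&gt;.&lt;first-arg&gt;; if it exists, it is executed instead of the real binary, otherwise the call falls through untouched.

```shell
# Sketch of a shell-mocking wrapper (illustrative names and paths).
MOCK_DIR="${MOCK_DIR:-/tmp/kitchen-mocks}"

run_with_mock() {
  local real="$1"; shift
  local mock="$MOCK_DIR/$(basename "$real").$1"
  if [ -f "$mock" ]; then
    bash "$mock" "$@"        # a mock is defined: run it instead
  else
    "$real" "$@"             # no mock: pass through to the real executable
  fi
}
```

In the real setup the wrapper is installed in place of the package manager, so a mocked package install hits the mock while every other package still installs normally.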
Either way, it is just a mocking facility.

Also, kernel capabilities. Sometimes it is necessary not to create a mock but to actually let the instance use a kernel capability. Capabilities are the way to allow root access, but only partially, to make it granular. In the case shown we use a specific capability, CAP_SYS_RESOURCE, for some of our manifests that touch the nproc limit. This actually allows the container to use that kernel parameter and makes the Puppet run green. If you are going to use this, I highly recommend checking man capabilities for exactly the kernel you are using in the test scenario, because these capabilities deviate heavily from kernel version to kernel version.

How do we actually define a Puppet type in Kitchen? A Puppet type, in our meaning, is just a server role, an instance role; it is not a Puppet resource type, just an instance role in our code and infra. Here is our implementation: we have a role entry point per type, where the type is basically the name of our server role. We just override it on the platform/instance level, and the Puppet code will pick the proper role on execution. Assuming you are doing this for the first time, it will probably fail; you then decide whether you need to add some shell mock or some kernel capability, and you repeat; when the run is green, your type is ready for testing and you can happily start writing tests.

I will try to actually demo that, but given my quite big experience of failing live demos, I have a recording for you. Is it visible? Not this one. Okay, pardon me, just a sec. Yeah, this is the recording. So here we start from scratch, running kitchen list to see the list of our adopted types.
We have a Zuul gating type, pre-converged for the sake of time constraints. First we make a verification run, running the existing tests: all is green, 82 tests pass. Then we add a new test which is supposed to be a failing one, implementing a simple feature: we are going to add the httpd daemon to the Zuul Puppet type. The DSL is very simple: should be enabled, should be running. We run the freshly created tests; they should obviously fail, and they do; it reflects that we have no httpd.

Now we are going to implement the actual infra code. This is already the git change, so you can see we just included the httpd module into the actual Puppet manifest, and meanwhile we have the test; a very simple example, just to have the whole picture. Now we deliver our code into the instance under test, meaning kitchen converge, which reruns Puppet within the instance. I will fast-forward here, because it is just a Puppet run. The instance is converged, and now we run the verification again, and it is green; it is not 82 but 84 tests now, and the new tests pass. So our code fulfills the testing scenario, and if we want to double-check, we can always log in to the testing instance with kitchen login and investigate with a standard shell.

So that is basically it for the demo. Again: the standard red-green-refactor cycle, applied to infrastructure code, as you can see.

The question I frequently get after showing this is: why should I write the same thing in both Puppet and some kind of testing tool like Serverspec? I can assure you that it really pays off at scale. Meaning, if you have many replicas, for example web ones, and you deliver a new change at the scale GoodData has, something subtle can happen: Puppet will be reporting an all-green convergence, while Serverspec, as a safety net, will tell you that something went wrong on a couple of nodes.
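Incidentally, the failing-first test added in the recording is just a couple of lines of Serverspec (a sketch of what was shown on screen):

```ruby
require 'spec_helper'

# Red first: this fails until the httpd module is included in the manifest
describe service('httpd') do
  it { should be_enabled }
  it { should be_running }
end
```

Two expectations, which is exactly why the suite goes from 82 to 84 tests after the change.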
So Serverspec as a safety net is really helpful at that scale.

The next huge point is TDD. It really makes the flow and allows you to write good, testable code; in the case of Puppet you get the ability to avoid idempotency problems and the like while developing your manifests. On top of that, and this is really useful, you can create more tests that check the actual outcome, which is way beyond Puppet. In this example we test an actually provided endpoint locally; meanwhile we have pretty good smoke tests that assure everything is configured properly, Kerberos for httpd in this case. It is Serverspec plus a bit of Ruby, but a really basic one, so I do not think it can be a blocker for anybody.

So, to wrap up: the benefits. Testing from scratch is really important from a quality perspective. Isolation: we are able to test everything in Docker even before it gets to any kind of real machine. Easy to test permutations: a standard, maybe manual, Puppet code test would look like this: you write new code, you run Puppet in noop mode on a real node, you see the change is fine, you apply it, and everybody is happy; but a couple of days later a fresh node of the same type gets redeployed, and something goes wrong. Test Kitchen gives you the ability to test both scenarios, in a very automated way. Resource efficiency: yes, it is the usual Docker selling point, but for us, running a private OpenStack-based cloud, it is really important not to deploy, for example, ten full-fledged VMs; instead we can run the same ten testing scenarios using one VM with Docker containers, and that VM will be just a Jenkins slave. So we pack things properly. The test-driven approach we already discussed. And a pretty important point: as you noticed in the demo, we actually get feedback even before making a git commit, even a local one. You can just test things right away.
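An outcome-level smoke test like the endpoint check mentioned above can be built from the command resource plus a little Ruby (an illustrative sketch; the URL and expected status code are made up, not our actual checks):

```ruby
require 'spec_helper'

# Smoke test: the service actually answers, which is more than a green
# Puppet report can tell you.
describe command('curl -s -o /dev/null -w "%{http_code}" http://localhost/status') do
  its(:stdout)      { should eq '200' }
  its(:exit_status) { should eq 0 }
end
```

The same pattern extends to things like a kerberized request against httpd: wrap the client call in a command and assert on its output.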
And assuming you amend all your code with the supporting tests, you get a good, naturally growing regression test suite. All these things are really easy to plug into something like Jenkins to create a continuous delivery pipeline.

On the open source side of things: all the things I described did not come for free. We were missing a lot of features and we were adding them; in all the projects I mentioned we have at least one patch in, and it was a great experience to become a kitchen-puppet core contributor. It is a small project, but we are very proud of it. So we contribute a lot upstream.

What is next in this story? As I mentioned, resource efficiency: yes, but we still have a lot of types, and we cannot afford to run the whole regression test suite for all types on every PR; it is just not possible, and anyway it would slow down the feedback loop. So we are using another open source project called puppet-catalog-diff, plus some automation on top of it, which basically compiles Puppet catalogs on top of the stable head branch, then another bunch of catalogs from the PR, and diffs them; from that you get a list of affected types. This list is by itself useful as feedback for the developer, in case some type appears there that the developer did not expect his code to influence. And this list can be used as input for Test Kitchen: we will run only the relevant types, and that will make things much faster.

A note on the agnostic approach: as you noticed, the driver can be anything apart from Docker, the provisioner can use plenty of popular configuration management solutions, and Serverspec itself is not bound to any configuration management. So you can test Puppet, Chef, whatever, with this testing framework, or even a manual change if you like. There is really a good amount of flexibility here.
For us it means that, assuming we have this huge test safety net, we really can make huge moves, like moving from Puppet to something else, or testing major Puppet upgrades from version 3 to version 4. It really pays off; it is a good investment.

I would like to thank you for your attention. I tried to make the last slide somehow useful, with links to the projects discussed, and I would like to answer any questions. Anybody? Yes.

[Answering a question from the audience] The main unit is a server role in Puppet. We have a prescribed set of tests based on that role. So we are not testing Puppet modules; we are testing the actual infra, the provisioned server roles as a whole, and in the end the infrastructure as a whole. This way we can assure that our infra is in good shape. Did I answer the question? Yeah, great, thank you.

Anybody else? So, thank you very much.