Hello everyone, my name is Neil. I live right here in Edinburgh; I came for university and never left, I love it that much here, and I hope those of you who have travelled to Edinburgh have had a chance to look around. I work just up the street at a start-up and have been coding for many years, and one thing that seems to be a constant is that Python is a tool you can pick up and get stuff done with. I've used Python in all sorts of areas; I even have it running in my house, where we have high-up windows and a Raspberry Pi, controlled by Alexa, opens and closes them. OK, is that better? OK, I'll lean forward a little bit. So, a problem came up earlier this year. We have a system, quite a complex application, and we needed to write system tests that really talk to the public API. We don't want to do anything that pokes in at the sides; we want to write tests that talk to the published interfaces. I looked around and naturally picked up Python, my go-to tool to get things done, and found a really good set of things that came together: we can write very powerful system tests in very clear, concise code, and I'm going to show you how that works. I'll give a little description of the system we're testing, because it needs some context, and then we're going to jump into some coding and see the testing at work. Hopefully this slide doesn't frighten anyone. We have a client and a server application, and I would hope that when you're building something like this, you push the logic down into the server side, particularly when you've got multiple clients like web and mobile, so you don't repeat any logic. And if you analyse what happens on that server side, quite often what you see is that there are low-level operations that carry out the things your application does, and there is another facet, the authorisation logic that governs what you can do and when you can do it.
And one thing we're seeing is that some businesses want to separate these two aspects in a service. The authorisation, the logic of what can happen when, sometimes has a different development life cycle, maybe even different owners, from the low-level writes to the database and all those types of operations. So we're seeing a separation of the service into two layers. This came to a head with one customer, a legacy bank: they have their big old IBM mainframe at the back, and they've implemented all these channels that talk to the banking system. The low-level operations can be delegated to that legacy system, but the authorisation logic is repeated in each channel. So that's definitely a problem for big, older organisations that are maybe being challenged by the upstarts of the world to move more quickly. So we take our client and server and add a couple of new components. The one in the middle is the enforcement point. It sits between the client and the service, and when the client makes some request, it turns that request into what we call a decision request. It makes the decision whether this request can even be carried out before the service is invoked. The service then just becomes a set of the low-level operations, pretty dumb. We move the actual evaluation, the working out of the answer to that decision request, into a separate component, and obviously if you have multiple of those channels, that component can be shared. So that's the reason for doing that. This decision point answers questions like: can client X do operation Y with resource Z? And it says yes or no; permit or deny is the terminology we use. It can also go out to other services that are available. Say the request was for a bank customer to make a payment: the decision may actually go and check the customer's bank balance to make sure they have enough money to make the payment.
And the last part of this is the authoring system. You can think of those two boxes in the bottom left as a software development system for authorisation logic, and that's what we're going to focus on. We want to write system tests for a system where we can write rules for authorisation, then make requests for authorisation that evaluate those rules and possibly go out to an external service to answer those questions. And one of the things about this system is that the external services it talks to are really defined by the rules; they're not intrinsic to the system. That's a particular challenge for testing it. Right, so enough of the slides. We've got an application. I'm not going to use the real application because it's far too big and complex; I've made a simple version that captures the architecture. It's a Spring Boot application written in Kotlin. Now is your time to boo if you don't like my technology choice. And, obviously, given the talk, it's dockerized; we can look, simple Dockerfiles. Who uses Docker, by the way? Anyone? Oh, quite a lot, so I don't need to explain what Docker does. That's good. We have a Docker Compose file, which brings up my two parts. In my simple version, the authoring point is called the compiler. It's not your standard compiler; it's a web service where you post code to it and it returns a data blob that you load into the engine that evaluates the rules, but it captures the architecture. So I can go in here. The font in the terminal in Visual Studio Code is a bit weird, it chops the tops off, so if anyone doesn't like it, give me a shout and I'll try another terminal, but it's nice to have it all in one window. So I can bring the app up, and we should see a couple of Spring Boot applications come up. They come up very slowly, not like Python. And here's possibly the simplest rule set you could give it: for any request, always permit.
I can send that to the compiler and it comes back with some meaningless JSON. I can bring the app back down and it's gone; Docker is great, let's just clean it up. I can even run it in the background, of course, so that I can do other things. So the first challenge I'm going to have in testing the system is that I want my tests to bring the app up. In Python I could, of course, use subprocess, shell out and run these docker-compose commands to bring the app up and down. But if you've ever written code that does that sort of thing, you know it's not the most robust. You don't want to put it on your CI server and then have to fix it three times a week when it falls over. So enter the Docker SDK for Python. This is a brilliant library, really easy to use, that gives you pretty much all the power of Docker Compose but with a nice Python interface. So we'll go back to our code; I've got a little example. Let's get rid of the app, don't want to see that again. Using the Docker library, I just need this one initialization line. I've made a class which is a Python context manager, which is great because however you use it, whether there's a normal exit or an exceptional exit, we will definitely close down the Docker container. So that's good for robustness. And you can see the enter and exit are really, really simple: we just run the container we want on enter, and kill and remove the container afterwards. And then we just use it in this with block. Inside the block the Docker container is running, and after the block the Docker container is gone. So we can go in here, just run that, and there you go, up comes the application. The Python code there is just streaming the standard output from the app and dumping it to the console. I can stop that and it's gone; there are just the two that I ran in the background earlier. So that's good. That's going to be how we bring the app up and down for testing it.
So the next thing: we were in here and we sent this request, so we're going to need some HTTP client to talk to the application. And if you're doing HTTP in Python, requests is the obvious answer, so we're going to use that. Then you need some sort of testing framework, testing infrastructure, and pytest is my favourite testing infrastructure, so I'm going to use that. So what does a test look like? Let's go and have a look at one in here. That's a test that brings up a Docker container, talks to it over REST and verifies the result. That's pretty cool. We can run it; if you go to the right directory, it's called test_compiler. It takes a little while to bring the container up and down. Great. Now, I meant to explain why pytest is my favourite testing framework, and I'm going to show you; let's create a new file. Sometimes when you've written tests in the past, there's a whole bunch of ceremony around subclassing base classes and writing set-ups and tear-downs, but a pytest test looks like this. Oh, does anyone use pytest? Yeah, loads of you, so you all know about pytest. Good, I like this one. Let's save that. Really good: I didn't have to write a lot of code that wasn't related to the test. Let's run with -v, because then you get a little bit more, and you get quite a nice output there showing exactly what went wrong in the test. So this is really good. The other thing I really like about pytest is fixtures. We can write simple code like this, call it foo, just return some data, and I can write a test that names the function I just created, decorated with @pytest.fixture, as a parameter of the test. Okay, that's useful. Obviously that particular case is not the most useful, but you can do quite a lot by passing things into your test that way; we're going to elaborate on it in a minute. Another thing you can do: I can write another fixture that depends on an existing fixture.
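The fixture pattern just described might look something like this; the names `rules` and `compiled` are made up for illustration, not from the talk's code.

```python
import pytest


@pytest.fixture
def rules():
    # the simplest possible rule set from the demo
    return "always permit"


@pytest.fixture
def compiled(rules):
    # a fixture can depend on another fixture just by naming it as a parameter
    return {"source": rules, "blob": "..."}


def test_compiled_keeps_source(compiled):
    # pytest sees the `compiled` parameter and injects the fixture's value;
    # no base classes, no setUp/tearDown ceremony
    assert compiled["source"] == "always permit"
```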
Great. Okay. So those are the fundamentals, but there's more you can do with it than that. Let's make a new module. I'm going to write a fixture which depends on a built-in fixture in pytest, which is a temporary directory for every test that uses it. I need to put my decorator on, and what I'm going to do is make a path to a file. Let's save this so I get some syntax highlighting. There we go. The test is going to depend on that hello file, and it's going to open and read it. Let's run that one. I got it wrong, I called it path there. There we go. So the fixture created a file, and I can actually go and look at that file; the default place is in here, the most recent one. There's my hello file. So that's quite interesting: I can write files. But I can do another, more interesting thing, which is to yield instead of returning there. What does that mean? It still works, but of course yield means I can write more code after this point. The test still works, but if we have a quick look in the temp directories again, now the file is gone. So the file existed when the test ran, but it's gone afterwards. You can see these fixtures are a really good way to set up some kind of resource and tear it down after the test. Where could I possibly be going with this? I'm going to use a fixture that runs a Docker container. Rather than write it all out, because it's getting a bit slow to do that, let's go in here, in my test framework. My Docker class here is a little bit more complex than the one I showed earlier, but it's really just about capturing the logs and dumping them out, giving you the diagnostics you need to debug a failing test, plus things like networking and other things you might need. So the test_compiler I showed you earlier depends on this fixture, compiler, and that's defined here.
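The yield-based fixture live-coded here would look roughly like this; the file name is assumed from the demo.

```python
import pytest


@pytest.fixture
def hello_file(tmp_path):
    # tmp_path is pytest's built-in per-test temporary directory
    path = tmp_path / "hello.txt"
    path.write_text("hello")
    yield path           # the test body runs at this point
    path.unlink()        # teardown: runs after the test, pass or fail


def test_hello_file(hello_file):
    assert hello_file.read_text() == "hello"
```

Everything before the `yield` is setup and everything after it is teardown, which is exactly the shape we want for a fixture that brings a resource up and takes it down.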
So I'm just using this function, which uses with my Docker container, and then I yield the Docker container, or rather the REST client for it, once it's ready. So I can write tests where, when the test runs, we know the Docker container is up and we have an API client that can talk to it, and when the test finishes, it's gone. What else have we got here? We've got this REST client, which is really just a wrapper around a requests session that makes the gets and posts a little bit more test-friendly; we can automatically fail on HTTP errors. And that also allows me, in the fixtures, to subclass that REST client and add high-level functions that talk to my API, so I'm not writing tests that deal with posts to REST endpoints and that sort of thing; I can write them at the level of the specification. That's what we saw there. Okay, let's do a more interesting example, because that one doesn't do very much. Here's one that depends on the compiler and the engine. I'm going to compile some rules, the same rule, always permit, then I'm going to load that into the engine, send it a query and expect a result. We can do that manually. Yeah, that was it there. We get that compiled blob of JSON, we can load it into our engine app, and then we can send it an empty request. The request is a JSON object, but nothing is needed in it, and we get a permit. Right, that's what it should do. A slightly more interesting example in the rules language I'm writing here: I can write variables, and this says the variable my_value depends on a key in the request object called my_value. And there are two rules: when it's foo, I permit; when it's bar, I deny. So I can compile that, we get a big blob of JSON, which I can then load. Where's my query? If you send it a foo, it permits; send it a bar, it denies. We can send it other things, and it's undecided, because there's no rule for that.
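The REST-client wrapper described above might be sketched as follows; the endpoint name and the `CompilerClient` subclass are assumptions for illustration, not the real API.

```python
import requests


class RestClient:
    """Thin wrapper around requests.Session that fails fast on HTTP errors."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()

    def post_json(self, path, payload):
        resp = self.session.post(self.base_url + path, json=payload)
        resp.raise_for_status()  # an HTTP error becomes a test failure
        return resp.json()


class CompilerClient(RestClient):
    """Adds specification-level calls on top of the raw client, so tests
    don't deal in endpoints and payloads directly."""

    def compile(self, rules):
        # hypothetical endpoint name for the authoring service
        return self.post_json("/compile", {"rules": rules})
```

A fixture can then yield a `CompilerClient` pointed at the running container, and the tests read at the level of "compile these rules" rather than "POST this JSON".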
So there are those three outcomes, as well as errors, that the engine can return. Now we're going to automate that test. Where are we? That one wasn't great. So here we are, and we're going to use a couple more features. I haven't mentioned module-scoped fixtures yet. A module-scoped fixture lets you create a fixture once, and all the tests in a module that depend on that fixture run essentially within that yield block. So I don't have to bring the Docker container up and down for each test, which is pretty slow; I can bring it up once and run all these tests. So there's the same code: I'm going to compile the rules in the authoring point and load them into the engine, and I've written a fixture here to factor out the common parts of the three tests I want to write. The fixture depends on those things defined globally for all the tests, and it's going to set up the engine and return it. Then I can just write all these tests against engine_with_rules, sending it queries and expecting responses. Let's run it: a little bit of a delay on the first one as it brings up those Docker containers, then you can see they all run very quickly, and a delay to shut it down. That's pretty cool. If we go back here, there's one last example: I can also get my_value from some external system. So I can do a GET request to this URL, and when the response is foo I'm going to permit, and when it's bar I'm going to deny. I can compile that and load it, paste it in there. And if I query... oh, the thing it's talking to doesn't exist. So what can we use to build a little mock web service? Anyone use Flask? Know Flask? I really like Flask, because you can write whole web apps in it, but also, with very small amounts of code, you can write little test services and mock services. So let's do one; let's live-code a web service on stage. Call the file europython. We use decorators again: we say app.route, let's use the my-value path, and write a tiny function that returns foo.
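The little web service live-coded on stage comes out to roughly this; the route path and return value are from the demo, the exact names are assumed.

```python
from flask import Flask

app = Flask(__name__)


@app.route("/my-value")
def my_value():
    # the engine's rule does a GET against this URL and reads the body
    return "foo"


if __name__ == "__main__":
    # the demo ran it on the default port, 5000
    app.run(port=5000)
```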
Did I get that right? Looks okay. So now we've got a little web service running; was that six lines of code? Pretty cool. And we go back here and can try testing it: localhost, port 5000. Yeah, it returns foo. Now, one thing I maybe glossed over: the URL the engine is talking to uses the host name external. Remember, this app is running in a Docker container. If we go back and look at the Docker Compose file, for the engine I added an extra_hosts line in there, which basically says: if you talk to that host name inside the Docker container, through the magic of Docker networking, it will talk to the host machine outside the container. So I can now send that... no, sorry, that's the compile. Let's query it: we get a permit. And if we change our service, we get a deny. Great. But how can we put a little mock web service like that right in a test? You could make another Docker container, wrap it up with the Python environment, and bring that up. That would work. But as I said at the beginning, the service we're talking to really depends on the rules we're writing to test, so you don't really want to break these two things apart: you want to define a service in a test and implement it right there in the test. And that is exactly what we're going to do. I'm going to write a fixture, service, that creates a Flask app and wraps it in the Service class, which we'll look at in a minute. The test depends, obviously, again on the compiler and the engine, and now on the service. And when I build the URL that I put in during manual testing, I could just write external, but that would depend on the way the test framework sets up networking, so it's much better to provide a method that, given a path, gives you the full URL. So away we go: compile some rules, load them into the engine, and the service, again, is only brought up inside a context manager.
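A context-managed background service might be sketched like this. Note one deliberate swap: the talk's Service class polls a service-control route and shuts down via Werkzeug's shutdown hook, which newer Werkzeug versions removed; this sketch uses `make_server`, whose `shutdown()` method stops the loop cleanly. Class and route names are assumptions.

```python
import threading

from flask import Flask
from werkzeug.serving import make_server


class BackgroundService:
    """Run a WSGI app in a background thread for the duration of a test."""

    def __init__(self, app, host="127.0.0.1", port=0):
        # port=0 asks the OS for a free port; make_server binds immediately,
        # and the real address is available in server_address
        self._server = make_server(host, port, app)
        self._thread = threading.Thread(target=self._server.serve_forever)

    def __enter__(self):
        self._thread.start()
        return self

    def url(self, path):
        # build the full URL for the test, so it doesn't hard-code networking
        host, port = self._server.server_address[:2]
        return f"http://{host}:{port}{path}"

    def __exit__(self, exc_type, exc, tb):
        self._server.shutdown()   # stops serve_forever cleanly
        self._thread.join()


# the mock service is defined right next to the test that uses it
app = Flask(__name__)


@app.route("/my-value")
def my_value():
    return "foo"
```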
And that's handy, because if you write a test that fails, anything that was brought up that way automatically gets taken down again. This was all created to write tests that go in our CI system and run nightly, and we don't want to have to fix it every morning. So let's stop that. A long pause while Docker comes up... there we are, we passed. So we write a test that defines a service, we can define the service right there in the test code, and you get this nice, really succinct test description. So how does this work? This is maybe the most complex part of the thing. This is the Service class. When you wrap a Flask app in the Service, it adds an extra route to it, service-control, with GET and DELETE methods. On a GET, it just returns a response. On a DELETE, it uses a little bit of Flask magic: it finds the function in the environment that shuts down the server. There's a URL implementation; it's not that interesting, it just builds the URL up from the things it knows. And running the Flask server is really a matter of kicking off a background thread and doing flask_app.run, like that. We start the thread, and then, a bit like the Docker container, we have to wait until it comes up, so we just poll that service-control route until it responds okay. When we want to stop it, we send it the DELETE, and off it goes, and then we wait for the thread to exit. So maybe the most complex part, but it's not that complex really. There's a little bit extra: the instrumented class up at the top here is a WSGI wrapper that records all the invocations and the responses that the mock service received, which allows us to do things like verify that the engine we're testing invoked that service exactly once and not 100 times. And that's kind of it. We end up with this; I don't know if I can make it fit. We end up with this really, really powerful testing system. Remember all the things this is doing in 30 lines of code.
We are bringing up two Docker containers, we're running a web service in a background thread, we're configuring it all to talk to each other, and running a test against it. We've been building tests this way for the last few months, we've got quite a big library of them now, and it's looking really good. It's really reliable, and it runs on our CI system every night. So we're a little bit early, but any questions? Thanks, a nice talk. Quick one: will the code be on GitHub? The code is on GitHub; it's on my last slide, actually. Let me flick through to... there we are, there's the URL. You can get me on Twitter too; sometimes it's about coding, and sometimes it's pictures of cats and stuff. Any more questions? The Python Docker module that you showed, which substitutes sort of for Compose: will that let you do the Compose-type functionality of specifying multiple services, multiple Docker containers, and stitching them together? Essentially yes, but you have to code it yourself in Python. When you bring up a Docker container with the Docker library, you specify all the networking rules and the hosts and any volumes you want to map into the container, and then you have to do that for each one. So it's kind of like you're doing Docker Compose, but you have to go through all those steps for each container, just as you do in Docker Compose too. Thank you for the talk. What, in this specific use case, makes Docker necessary, and why is it preferable to writing a series of pytest tests using the requests module to test your API? Well, the application was dockerized already. But another good reason is that there are two parts to the application: an authoring part and a runtime part, and they have different environments. The runtime part is in a high-availability environment; the authoring part, not so much. So when we do a new release, maybe we do a new authoring release to fix a bug, and the customer doesn't want to update their live environment.
So we can write a test that the new version can build the rules and run against the old version of the runtime, and we can automate all those tests of all the version cross-compatibility that we support. Thanks. Any more questions? Well, I guess that's it. So let's thank Neil again for this great talk.