Okay, so we're here for the last speaker of this session: Michelle Bultrowitz, with test-driven development of Python microservices. Can you hear me in the back? Great. So I'll be talking fast, because I have a lot to say. If I start to mumble, just wave at me, okay? Just do this, and I'll know I need to slow down. So, another microservices talk, but from a slightly different perspective. I'm Michelle Bultrowitz, and I'm going to tell you my story with microservices and TDD. They will be REST microservices; those are fine in some cases and bad in others, like everything. And it's not going to be much about how to do TDD itself; it's going to be more about the tools you need for TDD of microservices, and of services in general. So, who am I? Professionally, my whole life was at Intel. I did some work on distributed testing in C# on Windows, and we had components on Android and Linux written in Java. Then I went on to a project with C++ code for hardware-backed secure channels. But finally I managed to get myself onto a microservices project, microservices based on a platform-as-a-service, Cloud Foundry, actually. It's called the Trusted Analytics Platform. It's open, so you can check it out. But all good things come to an end, and I decided it was time for me to move on. So now I am an independent Python researcher, which is to say I've been unemployed for some time now. So if you have a few bags of gold lying around, and offices in some nice place somewhere in the world, you know, call me, whatever. And I'm trying to get a blog together, but it's not there yet. This is my GitHub; most of my stuff is there.
So, microservices: what are they, if you've somehow missed them throughout all the talks this year and last year, and all the other conferences and blogs from all over the world? They are web services. They live on the network, mostly in backends and middleware; sometimes frontend stuff can pretend to be microservices, but it mostly isn't, because frontends tend to be big. And they are micro, which is hard to define, but generally each should do one thing well, like all good UNIX programs. They act independently of one another: they're only interested in what comes in, what they put out, and their own data sources, and the rest of the world doesn't interest them. But for them to actually do something real, they need to work together. There are guidelines for creating microservice programs, and they're actually good guidelines for any web application: the twelve-factor app. I thought this twelve-factor app thing was created at Heroku, but supposedly Martin Fowler came up with something like it many years ago; whatever, definitely check out those rules. I think the most important ones are that services need to be stateless (so if you want some state, put it in a database) and that they should have one codebase, meaning no branches for deployment to different environments. You have one branch, one codebase for your service; if you want to configure it for some other environment, configure it through environment variables. There you go, works fine. A word of caution: don't go into microservices at the beginning. When you start your project, just start with a good old web app, a monolith. Just try not to write it in a shitty way with spaghetti code, and it will probably be fine. Then start cutting out bits and pieces once you get to know what your project is actually doing.
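The "configure through environment variables" rule can be sketched in a few lines; the variable names here are made up for illustration, not taken from the project:

```python
import os

# Twelve-factor style configuration: one codebase, and per-environment
# settings come from environment variables, never from config branches.
# REDIS_HOST / REDIS_PORT / DEBUG are illustrative names.
def load_config():
    return {
        'redis_host': os.environ.get('REDIS_HOST', 'localhost'),
        'redis_port': int(os.environ.get('REDIS_PORT', '6379')),
        'debug': os.environ.get('DEBUG', 'false').lower() == 'true',
    }
```

The same binary then runs unchanged in staging and production; only the environment differs.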
Because if you start with microservices, you'll spend way too much time just thinking about what you'll be doing in the future and what boundaries you should define. It's not worth it. So, my adventure with this amazing project. It was fine; I finally got into doing backend stuff, which I had dreamed about for some time. Sadly, I needed to do a lot of Java, with Spring Boot; don't know if you know it, I hate it. But I was actually able to create my own Python microservice, and I think it was doing rather well among the Java ones. And the project was growing: from about six people we went to 70 in a short time. So it was pretty interesting, and I got all kinds of interesting new tasks to do. But more people, more tasks: I didn't have enough time to look after my little pet service. And sometimes pretty embarrassing bugs would get through my review, even in my own code. You know, it was just a small change, it shouldn't do anything bad, but then the code wouldn't start in production. So yeah, a few embarrassing moments. And even the dreaded interns needed to make commits to my code, and I didn't have the time to review them thoroughly because I had my own stuff to do. Then they would break the service, and I would have even less time to do my stuff, because I needed to debug my service. It was not going well. So, what could I do about it? Tests, of course. Of course, tests: the solution to everything. You have problems with software? You probably don't have tests. But I had those, and we had 85% coverage, which at that time was good among our services. But it started as proof-of-concept stuff, and the tests were getting pretty complicated: hard to read, hard to maintain, a lot of mocks, ugly mocks for everything.
And the funny thing was, with all those mocks and all those lines of test code, they didn't actually check whether the service would even run, whether it would start at all. But aren't tests supposed to check that? Well, those were unit tests, and unit tests don't check if your whole blob works; they only check if bits of the blob work on their own. So I needed something new. Why not start the whole process like it runs on the cluster, like in production, and just call it the way it would normally be called? Why not configure it in such a way that the application doesn't know it isn't in production: no ugly fake classes, a real Elasticsearch connection, for example. And it would be even better if I could run those tests locally, before submitting a pull request. Or actually, if the other people could run the tests locally before they submit a pull request to me, that would be fine. With that, I would have a pretty high degree of confidence that the committed code would actually run when it goes to production. But it's kind of hard. You see, you have this service, and it has this database, let's say, but your service depends on another service, and that service depends on yet another service, and so on. So how do you test that locally? It's kind of hard to set up, right? So, about external services: you can deal with those in tests with applications that mock out, or stub, other services. There's stuff like WireMock, an old Java veteran: a process that you can start and then configure through HTTP calls so that it starts serving HTTP on other ports, basically pretending to be other services. There's also a similar thing in Python called pretenders.
It can be easily used from normal-looking tests, but I didn't use it a lot; I don't think it's really feature-full, but your mileage may vary. And there's also a thing called Mountebank, which is basically WireMock but better, with more features. It isn't written in Java; it's actually Node.js, but you can just download a single binary package for your system and not install any dependencies. Good stuff. The second thing: databases, you know, my Elasticsearch. I could just tell everyone to install it before they run the tests, but we had other services, so then people would also need to install Redis for another service, and whatever else for some other service. Junk would pile up on our systems, and it was tiresome. Why should you need to do a lot of setup just to get to work on a project? It wouldn't make the tests fun, if tests can be fun; I don't know, they are fun for me. So, there's a thing called verified fakes, but not many people do them. It goes something like this: imagine that you're creating a database, and along with your code you distribute a test double for people to test against, and you yourself test that your test double actually behaves like your database. That's a verified fake. I don't actually know of anyone who does them, but they would be pretty sweet in some cases. Nobody does them; it's just too much time. But you have another buzzword: Docker. You want Redis? Just pull Redis, run Redis, that's it, and the only dependency your tests have is Docker and nothing else. Good stuff. It used to be Linux-only; I heard that now it's also on Windows and OS X, so, multi-platform, but it doesn't work equally well everywhere. So yeah, use Linux. Okay. Now we have the basics for our TDD tools, so let's grab some broader context.
So, our magical tests of the whole application are actually component tests; component tests according to Martin Fowler. I stole this slide from him; there's a link at the bottom. There are a lot of names for this kind of test: in Building Microservices from O'Reilly, a great book, they are called service tests, and I tend to use that name. So: component tests, service tests, or, in Harry Percival's book Test-Driven Development with Python, I think, functional tests. All those names mean about the same thing. And this is TDD. This is all you need to know about TDD; I won't go deep into it. The general idea is that you write a functional test, a service test, and see it fail; then you write a unit test, write some code, write another unit test, write some code, until your functional test passes and you have one feature; then you do it again, again, again, whatever. This is called double-loop TDD, I think, or outside-in TDD, and sometimes it involves a thing called wishful coding. So yeah, TDD: do it, but we don't have time to go into the details. So, I noticed that TDD was exactly what I needed to save me in the project. It gives confidence in the face of change: I wouldn't need to push to the staging environment to see if everything actually works, I could just run the tests, and the robot does everything for me. I could take someone's branch from a pull request and, even if I didn't trust them to have run the tests, I could run them myself, easily and quickly. So yeah, I craved something like that. One more good thing about TDD: it tends to ward off bad design. It doesn't guarantee good design, no, but when you need to test stuff, you tend to write it in a way that is less obnoxious to call. But TDD has one critical flaw: it requires discipline.
You need to practice TDD. You need to think about TDD. You need to live the TDD way. You don't become a Shaolin monk by just reading up on kung fu on the internet; you need to go to a monastery and live there for a while. Another thing, though this isn't as big an issue as the discipline requirement: you need tools to do TDD. And those I can give you, so lucky you. As a testbed for implementing those tools, I created a service called PyDAS, a Python rewrite of the DAS service. And it's not German: DAS is the Data Acquisition Service, one of our microservices. They're all open source, by the way. DAS was old, one of the first proofs of concept, and it kept running and kept collecting bugs. But its interface was well-defined, so I thought it could be my little guinea pig: I could try out all my testing stuff on it. And I rewrote the service, basically, using TDD, so it wasn't as crappy as the Java one. Actually, it was rather good. From today's perspective it isn't great, but it was educational for me, and if you look at it (it's there on GitHub, check it out) it can be educational for you, because everything shown on the slides is implemented in there. I've also decided to use pytest in my tests, because pytest is great. It's concise, it has this amazing thing called fixtures, and you can compose those fixtures. Use pytest, seriously. Do it, please. Or maybe there's something else that's just as great, but I don't know it, so I'll be sticking to pytest. The tests. Do you see this code here? This is how your service test, your functional test, can look. It's rather concise, don't you think? You can see a few fixtures in its signature; this is a pytest test.
So if you wonder what the structure of the fixtures is, you can see it here: this is our service fixture, and this is the DB fixture, and this is the test. They depend on other fixtures in turn, but we'll get to that later. This test can be pretty simple: let's say we put something in the database, then we just do an HTTP request on the URL of the service, with some headers, because we are also testing authorization; we're going through the whole application stack, testing everything. And then we do some assertions. So they look nice, I think. About the fixtures: I used a trick here. You see this DB fixture, and it doesn't do much. It's a function-scoped fixture, so it gets recreated for each test, and it only yields stuff from here, and then it flushes the DB. This db_session is a Redis client that lives for the whole test session. And this might be controversial, because I'm not recreating Redis for each service test. I'm reusing it, because I thought to myself: you know, the Redis people probably did their job right. So I'll just clean it after each test and save a lot of time, not having to recreate an entire Docker container with a database in it. Also, of course, db_session requires a Redis port. And where does the Redis port come from? From Docker. I just set up a Docker client, download the image if it's missing, and wait for the container to start, because you need it to be able to accept connections. Then I yield the port back to the tests, and when the tests are done, after the whole test run, I remove the Docker container. So, no trash left after the tests. If you're interested in the details of the code, just check it out in PyDAS. Okay. This fixture represents our service's process, the whole process, and you see it's probably the same trick, with a session-scoped version of the fixture.
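The "wait for the container to start accepting connections" step is just polling a TCP port until a connect succeeds; a minimal version could look like this:

```python
import socket
import time

def wait_for_port(port, host='localhost', timeout=5.0):
    """Poll a TCP port until something accepts a connection, or time out.

    Useful right after starting a Docker container or a service process,
    before handing the port to the tests.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                return True
        except OSError:
            time.sleep(0.05)
    return False
```

A fixture would call `wait_for_port(redis_port)` after `docker run` and only then yield the port.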
So it will be the same, but long-lived. We also have an external service imposter, and this imposter is the thing that pretends to be another service. And what do we need to implement those two things? We need Mountebank, of course, the thing I told you about earlier. But we also need to manage Mountebank from our Python tests. So I created a little library called mountepy, and I think it's rather nice. I use it in PyDAS's tests and hopefully elsewhere in the future. You can install it with pip; go to the GitHub page, add some stars, download it, play around with it, whatever. And what does it do? It starts a Mountebank process. It downloads Mountebank if you don't have it: if you have it, it will use your system Mountebank; if you don't, it will just download the binary package. It only works on Linux for now, but if you want it to work on OS X or whatever, file an issue or, even better, create a pull request. And when I saw that it manages the Mountebank process, I noticed that I could use it to manage other processes too, for example this one, my service. So it also manages service processes: anything that you can start on the command line and that serves HTTP, you can manage with mountepy. Okay, this is a big chunk of code, but bear with me. So, our service: what does it take to run it? It has a command in the standard Popen notation, an array of strings. By the way, that's waitress there; I've been using waitress, a WSGI server like Gunicorn or uWSGI. And here is the call into mountepy: you just create a service object, you supply the command, then you supply environment variables, because you configure your service through environment variables. And you may notice this port thing here. Yeah, it's a notation that I've introduced in mountepy.
If you don't want to pick the port yourself, you can just put this in, and mountepy will give you a free port from your computer. So you start the service, you return the object to the tests, and after the tests you stop the service object. So, no trash after tests. Nice. And the imposter, what is it? Well, we use Mountebank to create the imposter. So, Mountebank is here; nothing interesting: create Mountebank, start it, and stop it afterwards. But with Mountebank we can create those imposters, and I decided to make them function-scoped. They are mocks: they remember what requests went into them, so you probably want to clean them out between tests. This can be useful. This here is a simple declaration of an imposter: I just say, on some port, on some HTTP path, return 200 for POSTs on that path; but you can configure it more precisely. And it's the same trick with destroying it afterwards. So, okay, we have everything, right? Everything to create our legendary service tests that will save our lives. But it's more than one test, and each starts one process, then another process, then an entire database in a Docker container. So, won't they take long? No. This is everything, like three seconds. Maybe if you have tests that run that fast, people will actually run your tests. So, profit. To be fair, when it has to download the Docker image, it will take longer. And there's also this thing where, on the first run after my machine boots up, it's about three times slower, I think; I never looked into exactly why that is. But it works. Oh, and this is just pytest; we'll get to tox later. Okay, a few warnings. If your service test screws up, pytest will give you all the standard output, and you have output from a few processes there.
So you will get a lot of stuff. You'll definitely know what went wrong, but it's going to be a wall of text, so be aware of that. And a broken fixture, for example my service fixture, will also yield crazy big logs. I break fixtures a lot, because I tend to refactor a lot. And with tests, you can refactor, and you should refactor, and sometimes you'll break your fixtures. It happens, it should happen, it's good for you. And even with the great technology of those masterful tests, they won't save you from human stupidity. For example: I prepared my PyDAS, and I wanted to check whether I could actually switch out the old service for my bright and shiny new Python one. I set it up in our staging environment, and it just kept crashing on requests. And why was that? Well, I had hard-coded localhost in the Redis address, because on my machine the tests would pass; you run those tests against localhost. And I just didn't notice. So no, they won't save you every time. Okay. So we have TDD, we have our wonderful tests of the whole application. What can go wrong now? Well, other people, because it's never you; we are perfect, of course. Other people can just not give a shit about TDD and crap out some code and put it in a pull request: no tests, 130-character lines, Java-style variable names. And it's bad. So what do we do about them? Well, we have a few weapons against them: we have test coverage, we have static analysis, and we have contract tests. (Sorry, I almost drowned myself with a bit of water. No problem.) Okay. Let's go over each of those. This is a short version of my .coveragerc for PyDAS; if you don't know what that is, you should totally look up the coverage library for Python. And I have this thing, and it might be controversial, because I say that the tests fail if you don't have 100% coverage.
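A `.coveragerc` fragment illustrating that setup. The package name `data_acquisition` is assumed here; note also that collecting coverage from subprocesses additionally requires pointing the `COVERAGE_PROCESS_START` environment variable at this file and calling `coverage.process_startup()` in the child process:

```ini
# .coveragerc (sketch): fail the run below 100% coverage, and collect
# data from subprocesses too (parallel mode + a later "coverage combine"),
# so the real service process serving HTTP requests counts as well.
[run]
branch = true
parallel = true
source = data_acquisition

[report]
fail_under = 100
show_missing = true
```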
And why would you not have 100% coverage? You do TDD, right? Is this one piece of code really not worth testing? Don't you want it working? If you don't want it working, maybe just delete it. One argument I hear from people is: but if I want 100% coverage, I need to write those brainless tests that just set up mocks and check whether I call five mocks in a sequence, because it's glue code. You know what? You don't have to do that. Because you have this, and it's one of the most amazing things in Python that I've found in a while: an option of the coverage library that lets you collect coverage from other processes. So you can actually get coverage from your service process as it serves real HTTP requests, and it gets added to your coverage. Great. And if you want to see how to do that, check out PyDAS and this documentation link. Static analysis: yeah, I just put it into tox. If you don't know it, tox is a great test runner for Python; you should totally use it, the link is down below. I just run my tests, check my coverage, and then fire off pylint. If pylint finds anything, it gives a non-zero return code, and the tests fail. So if you have some bad stuff in your code, the tests will fail. Yeah, I know pylint isn't perfect; most of the time, when pylint says that seven method arguments are too many, it is too many. But if you're absolutely, totally sure that you have to do something it complains about, you can just add a comment: pylint, disable whatever. So with that, crappy code gets caught before review: more time for me, more time for you to do actual work. Contract tests. These are kind of a big thing. They keep your interface in line.
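A `tox.ini` sketch of that pipeline (the package and directory names are assumed): run the tests under coverage, combine subprocess data, fail below 100%, then run pylint; any non-zero exit code fails the whole run:

```ini
[testenv]
deps =
    pytest
    coverage
    pylint
commands =
    coverage run -m pytest tests/
    coverage combine
    coverage report --fail-under=100
    pylint data_acquisition
```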
And we have this wonderful thing called Swagger, a language for describing REST APIs. You see, you have those paths; there's a GET method; there are parameters, with names and types; there are responses, for example code 200, with a type for the response body, all of that stuff. Oh, by the way, you see these little apostrophes, or whatever they're called, here? You could just write 200 as an integer in Swagger, but the Python YAML parser will complain about it. But what does Swagger give us? We can have a contract for the service. The contract is the thing keeping our whole microservice platform running, because people have expectations about your services, and you have expectations about other services, so we need to keep the contract tight. And with this, you can keep the contract separate from the code. Because at times, somewhere two or three levels deep in some model, you'll just add some optional field, and you've actually just changed your contract; your normal tests may not pick that up, but it isn't something to be done lightly. So watch out for that. What are we going to use to do something with Swagger? We're going to use Bravado, a client from Yelp. Thanks, Stefan. It dynamically creates clients for your Swagger-described service, and it also verifies a lot of stuff: the parameters that go in, the things that are returned from the service. So you don't need to check, say, the types of the data objects that get passed; it's done automatically for you by Bravado, and it's great. You also cannot do things like "I returned 201 when I was supposed to return 200, but whatever"; Bravado will beat you up for that. But it's configurable; it doesn't have to be that harsh, whatever you want. Usage, yeah.
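A Swagger 2.0 fragment of the kind shown on the slide; paths, names, and the response shape are illustrative, not copied from PyDAS. Note the quoted `'200'` that keeps the Python YAML parser happy:

```yaml
swagger: '2.0'
info:
  title: data-acquisition
  version: '0.0.1'
basePath: /rest
paths:
  /das/requests:
    get:
      parameters:
        - name: org
          in: query
          type: string
          required: true
      responses:
        '200':
          description: Acquisition requests for the given org
          schema:
            type: array
            items:
              type: object
```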
So in our service tests we can just swap in the Bravado Swagger client in place of requests, and our service tests now also double as contract tests. And then I thought to myself: but we shouldn't cover the whole API with service tests; it doesn't work that way. You should use service tests, like integration tests, for a few critical paths; but say you have other, quirky stuff in your API, then you should use unit tests for that, and it would be sweet if we could verify those against Swagger too. So I made a little library, because Bravado is extensible: I created an HTTP client that actually uses Falcon's test framework. And for all web frameworks, Flask, Django, whatever, you have those test frameworks, right, for unit-testing your APIs. You can also download this library with pip. It integrates Bravado with Falcon, but you can look at the code; it's not much, and you could do the same for Flask or whatever. So now all our tests that touch the API are also contract tests, which is pretty sweet. This is how such a test can look. You see some imports here; you create the client and the spec (the spec is just a YAML file); here you plug in the HTTP client, the Falcon HTTP client, which needs to be supplied with an API object, the same kind of thing as the app object in Flask, for example. And then you just do a request. You don't need to worry about what the path is, because the client knows that. Here you specify the parameters, for example the body; and worker could be something like a path parameter, and Bravado will check all of that. You see you need to call result() here, because the interface also accommodates asynchronous requests; and then you just assert on stuff. The service test, you see here, doesn't look much different. Normally you wouldn't do that.
You know, don't duplicate the same test as both a service test and a unit test. The thing that differs here is that you still have a spec, but you don't use the special HTTP client; here I'm just using the standard synchronous HTTP client, and I need to supply a URL now, because this is a real service process we're talking to. You can also specify some additional request options. So, those are the contract tests. Now we should be fine: our service is taken care of, it's working perfectly, it has 100% coverage, the tests actually check that the service runs, and we won't mess up the contracts by accident. So great, our job is done, we can go home now. No; this is just one microservice. And in theory all these things should run together, because you know your contracts and you abide by your contracts, and everything's fine. But the world isn't that simple; there are things you won't foresee. So you need end-to-end tests and exploratory tests for your whole microservice system, and they are hard to do, and sadly I don't have the time to talk about them this year. So yeah, here are a few sources. And thank you. No, nothing, seriously. No, it's not about the pinchers. So everybody wants to leave. Oh, by the way, two good talks from this year that, I think, relate to my talk. Hi, thank you for the talk. I was wondering whether you tried, instead of using just docker-py, using Docker Compose in the tests, so it just spins up the whole infrastructure and you can run your application inside a container; so, I mean, you may have even the same names and the same replicated environment. Yeah, I could do that. But actually, our applications didn't run in containers, so they only needed one or two containers each. So, in the case of one container, it's easier to just do it this way, and the fixtures can do stuff like downloading; I think Docker Compose can also download the images.
Yeah, but no, it worked well in fixtures. And if I wanted to use Docker Compose, then I would need to wrap Docker Compose in a fixture, because I didn't want people to have to do anything: it's just git clone, install Docker, install tox, and then just run tox. That's everything you need to do to run those tests. So, pretty sweet. You don't need to install Mountebank; you don't need to install anything else. Well, you do need to install Python 3. Sorry. You're welcome. Anyone else? Okay, let's thank Michelle, and let's go for lunch.