Olga, tell us about social distancing from our dependencies. Okay, thank you for the introduction. I'm very happy to be here, and today I'm going to be talking about a timely matter. Since all of us have spent a few months applying physical social distancing, today we can discuss how we can use those techniques to make our systems a bit more reliable by isolating our dependencies from them. Before I begin, let me introduce myself. My name is Olga Matuda and I come from sunny Greece, while I currently reside in, as I said, very sunny London. Surprisingly, this week has been very nice. I studied electrical engineering, but then software won my heart. And I enjoy throwing myself into risky activities, be it public speaking, dancing with other people, or trying new recipes. Nowadays I work at Bloomberg as a software engineer, and I spend a lot of my time in my team debugging C++ code. We do write some Python code as well. We have mostly used Python as a tool for writing scripts to quickly achieve something, or for integration or end-to-end testing. But last year we had a crazy idea: how about we use Python for a real service? We wanted to do something simple, and we said this is a good opportunity for us to try using Python for something real that we can use in production. So that is my story today: the story of a Python service, written by C++ developers. And I promise you, it's not a horror story, but a story of success and many, many learnings. As I said, we have mainly been using C++ in the past, and we have used Python for simple applications. This is how we tend to use Python, and this is how we envisioned our new service: it's going to be one function, we don't really need more. We're going to have an input, we're going to do the stuff we need, and we're going to have our output. How hard could that be? It turned out it wasn't that hard.
We wrote some completely valid code that looked a bit like what you see on the screen, and we thought that we were ready. Everything seemed to be working. Then we had a meeting with our product manager, Mark. He said, is this ready to go to production? And that is where we asked ourselves: okay, maybe we should play it safe, as we do with all our other services. It's time to write some tests. Yeah, what could possibly go wrong with Python? But okay, let's write some tests. And that is when we realized that it wasn't as easy as we had imagined. We didn't really know how to write unit tests in Python. We had written integration tests before, but unit testing was something new for us. We tried writing some, but we had a dependency that ended up misbehaving in the dev environment. So our tests could fail randomly, meaning that our continuous integration platform would complain a lot, meaning that we would have many emails, meaning that there was no real automation we could build on top, because everyone was very annoyed. And everyone was super, super sad due to that. We knew it was time to take this a bit more seriously. It was a small project, but we had to apply the same good principles that we apply in any other project. And since we had an unstable dependency, we knew that we had to isolate our environment from it. We had to somehow mock it and make our tests not directly dependent on it. So this is a sort of physical distancing from that unstable dependency. As I cannot go into much detail about that specific project, today I'm going to use another example. Let's say that we want to create an application that gets the movies that are currently playing in theaters, like in normal times. And we want to rate those movies based on a very arbitrary metric of ours, which is based on the director of the movie, and we want to send a rating back to that API.
So let's say that every movie directed by Quentin Tarantino gets a 10, and every other movie gets a 3. This is our great movie application that gets and rates movies. Yes, I laugh at my own jokes. Let's start with the simple version of that. Here is an example of an application that uses The Movie Database API, a free API that you can use for your personal projects. We send one GET request to retrieve all the movies that are currently playing in theaters. So we use the requests module, we send that GET request, we get back, hopefully, a good response, we put it in a nice dictionary, and we return it. That is all that we have to do for now. And as Mark is really impatient and wants this in production, we need to test this and make sure that it works fine. This is how a test for this application would look. We are using the unittest.mock module, and we want to patch the call that sends that request to retrieve the movies that are currently playing. Above, you can see that we are specifying a mock response that can be a much simplified version of what the real API would return; we just return the fields that we are actually interested in, and we get a simplified dictionary back. That is a very low-effort test. At least we don't have much to think about, except maybe for the first one when we are first creating it. And this is easily extensible to more tests: we can write more tests where we parametrize, expect empty results, different responses, different side effects when we call that external API; we can expect failures, connectivity issues, and everything else we can imagine. And that was okay: it was valid code, it was a very valid test. Now let's see what happens when we have a more complete application. This is what we want to achieve in this example: we said that we want to get the movies that are currently playing in theaters.
That returns us the movie titles and the director IDs. So we need to send another request to The Movie Database API to get back the names of those directors, so that we can then apply the rating we were discussing before. On the second line, we send another request where we pull the movie directors' names. When we have all of that, let's say we want to do some filtering on all of those movies we have acquired, because we don't want to get banned from that API for sending ratings for every movie; we want to send ratings only for some of them. So we're using another, let's say in-house API, the magic API, and first we send a request to it to do some filtering on those movies. Once we do that, we have a filtered movies dictionary, and for each movie in it we apply the magic algorithm that we have created, which decides the rating according to the director of the movie. Then we do a POST request to The Movie Database API, posting the rating we have calculated. In the end, we can just return the dictionaries and save them in a database, log them, whatever we want to do. That means that for this simple application, we have five different calls to external APIs: three of them go to The Movie Database API, and two of them go to the magic API that we have created. This seems to be completely valid code. That's everything that we want, we have our separate functions, all seems good. What happens when we want to test it? We patch it up, right? And then philosophical questions arise: how many context managers can you fit in 88-character lines? We need to mock those five different API calls, and I have strong opinions on styling, which made my life very hard. Thankfully, we can use decorators, so our tests are going to look something like this. Yeah, I couldn't even attempt using context managers; decorators to the rescue.
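The pain described here can be illustrated with a sketch like the one below. Again, every name is hypothetical: four stand-in functions represent the external calls (the per-director GET is folded into one target for brevity), and every single test has to drag the full stack of `@mock.patch` decorators along with it.

```python
from unittest import mock


# Hypothetical stand-ins for the external API calls described in the talk.
def get_now_playing():                 # GET: movies now playing
    raise RuntimeError("network call")

def get_director_name(director_id):    # GET: director name by ID
    raise RuntimeError("network call")

def filter_movies(movies):             # in-house "magic" filtering API
    raise RuntimeError("network call")

def post_rating(title, rating):        # POST: rating back to the movie API
    raise RuntimeError("network call")


def rate_now_playing():
    """The 'magic algorithm': Tarantino movies get a 10, everything else a 3."""
    movies = get_now_playing()                                   # {title: director_id}
    named = {t: get_director_name(d) for t, d in movies.items()}
    chosen = filter_movies(named)                                # {title: director_name}
    ratings = {t: 10 if d == "Quentin Tarantino" else 3 for t, d in chosen.items()}
    for title, rating in ratings.items():
        post_rating(title, rating)
    return ratings


# Decorators apply bottom-up, so the mock arguments arrive in reverse order --
# one more detail every test author has to remember.
@mock.patch(f"{__name__}.post_rating")
@mock.patch(f"{__name__}.filter_movies")
@mock.patch(f"{__name__}.get_director_name")
@mock.patch(f"{__name__}.get_now_playing")
def test_rate_now_playing(mock_playing, mock_director, mock_filter, mock_post):
    mock_playing.return_value = {"Pulp Fiction": 138}
    mock_director.return_value = "Quentin Tarantino"
    mock_filter.side_effect = lambda movies: movies  # filter passes everything through
    assert rate_now_playing() == {"Pulp Fiction": 10}
    mock_post.assert_called_once_with("Pulp Fiction", 10)


test_rate_now_playing()
```

Notice how the test body is mostly mock plumbing; the actual business-logic assertion is a single line buried at the bottom.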
We have four different decorators that we need to carry with us in each test that we're going to write. The first one actually needs to be parametrized, because we send two different GET requests to The Movie Database API, depending on the exact endpoint that is called. And as you can see, this is already a lot. We need to write more tests, and we need to carry all of this with us every time. So that test was okay, but what happens when we want to write more tests? What about when we have different arguments, other responses, empty responses, other failures and exceptions? In that test, we want to test our application and its business logic. It's not really the place to carry all of these decorators and test the different APIs. Of course, we want to test that they are called correctly, but can these tests even give us that? It's easy to mix the business logic with these I/O concerns. And because the mocks don't really help with that, we would need to write real integration tests. And yeah, that was a lot: it was very easy to forget to move those decorators with you, and it was tightly coupled to implementation details. So we had to change our strategy. And like every good developer, when we don't know what to do, we use a search engine and Stack Overflow. You type into your favorite search engine, "how do you write unit tests in Python?", you click on the first result, and someone on Stack Overflow always has a very strong, absolute opinion that says: every time you use mock.patch, it means that you have a design flaw in your architecture. It seemed a bit offensive in the beginning, but it made us think a lot. This phrase really stood out, and it triggered more design discussions in our team. So why were our tests so difficult to write? Then we remembered another principle that all of us good developers know: don't mock what you don't own. mock.patch ties you to specific implementation details.
But what we really wanted to test was whether our application is doing the right thing. And we had to make a decision. So here we were, C++ developers discussing Python, and then it came to us: probably we need to go back to our roots, to everything that we have actually been complaining about in C++. To write unit tests in C++, we tend to use dependency injection. That is a technique where an object supplies the dependencies of another object: instead of a client specifying what service it will use, something else tells the client what service to use. We combined that with the adapter pattern, a term that I have borrowed from Harry Percival's Python talk this year, "Stop Using Mocks", which is a great talk and goes into further detail on these topics. The adapter pattern provides an alternative interface to a class or API that you are using and makes it easier to use. It converts an incompatible interface of one class into something that is easier for your code to use. And that is not much different from a thin wrapper around the different functionalities in those external API calls. So this is how a wrapper is going to look: we create a new class, and we hide in it all the different calls that we are sending to The Movie Database API. And this is how we use it in our code. We said we apply dependency injection, so our main rating application can specify in its initializer its different dependencies, which are wrapped in classes (or not wrapped in classes, if you don't want to do that). The code below is not much different from what we had before; instead of calling functions, we now call methods on these objects that we have created, these two wrappers: one for the movie database API and one for the magic API that we own. And what happens when we write tests? Now we can use the Mock object of the unittest.mock module.
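A minimal sketch of the wrapper-plus-injection shape described here, with all names invented for illustration: two thin adapter classes hide the HTTP details, and the application receives them through its initializer instead of reaching for module-level functions.

```python
class MovieDatabaseApi:
    """Thin adapter: hides the raw HTTP calls to the external movie API
    behind an interface shaped for our application (names are illustrative)."""

    def now_playing(self):
        raise NotImplementedError("real HTTP GET omitted in this sketch")

    def director_name(self, director_id):
        raise NotImplementedError("real HTTP GET omitted in this sketch")

    def post_rating(self, title, rating):
        raise NotImplementedError("real HTTP POST omitted in this sketch")


class MagicApi:
    """Adapter for the in-house filtering service."""

    def filter(self, movies):
        raise NotImplementedError("real HTTP call omitted in this sketch")


class RateMoviesApp:
    def __init__(self, movie_api, magic_api):
        # Dependency injection: whoever constructs the app decides which
        # implementations (real, fake, mock) it talks to.
        self._movie_api = movie_api
        self._magic_api = magic_api

    def run(self):
        movies = self._movie_api.now_playing()                   # {title: director_id}
        named = {t: self._movie_api.director_name(d) for t, d in movies.items()}
        chosen = self._magic_api.filter(named)                   # {title: director_name}
        ratings = {t: 10 if d == "Quentin Tarantino" else 3 for t, d in chosen.items()}
        for title, rating in ratings.items():
            self._movie_api.post_rating(title, rating)
        return ratings
```

The business logic in `run` is now expressed entirely against the two small interfaces we own, so a test can hand in any object with the same methods.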
And it's much easier to mock, as the API that we have created is much simpler. We have more control over it, and everything is decoupled. We can add more and more tests, and testing in the future will be much easier. As you can see from this test, everything is much more readable than before; we don't have ugly context managers or decorators. And if you want to take it a step further, you can even create your own fake objects on top of the Mock objects and reuse them in tests. The disadvantage is that you have to put a little more effort into your tests, but the result, as you can already see, is much more rewarding. So we now have much more readable, extensible, flexible tests. There's no danger of forgetting to patch a dependency, and we don't have to care about any specific API implementation details. We hide all the ugliness in our wrappers, and we have nice interfaces to work with. We test our business logic, not the I/O. And what really stood out is that it made our design more thought through: it triggered conversations in the team about how we want to design our production code, our application. This is nice for testing the implementation details of our main function, but we also want to test that everything plays well together, that everything is connected. So what about integration tests? What happens there? At Bloomberg, we have a concept that we call imposters, also known as verified fakes, which is a fake API generator that has some verification for I/O. So we create fake APIs, but with strong contracts about how they can be called. We can test against them and see that the calls to them go as they should. This is how it looks: on the left-hand side you can see the utilities, and we can create multiple APIs from just a simple fixture that we have created, which works well for the internal services that we have.
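With dependency injection in place, a test can look roughly like this sketch (the class and method names are carried over from my illustrative example above, not from the real project): plain `Mock` objects are handed to the application, and no import paths are patched at all.

```python
from unittest import mock


# Minimal stand-in for the application class described in the talk: it
# receives its two API wrappers through the initializer (dependency injection).
class RateMoviesApp:
    def __init__(self, movie_api, magic_api):
        self._movie_api = movie_api
        self._magic_api = magic_api

    def run(self):
        movies = self._movie_api.now_playing()   # {title: director_name}
        chosen = self._magic_api.filter(movies)
        return {t: 10 if d == "Quentin Tarantino" else 3 for t, d in chosen.items()}


def test_run_rates_tarantino_ten():
    # Plain Mock objects stand in for the wrappers -- no mock.patch needed,
    # and no danger of forgetting to patch one of five import paths.
    movie_api = mock.Mock()
    movie_api.now_playing.return_value = {
        "Pulp Fiction": "Quentin Tarantino",
        "Cats": "Tom Hooper",
    }
    magic_api = mock.Mock()
    magic_api.filter.side_effect = lambda movies: movies

    app = RateMoviesApp(movie_api, magic_api)
    assert app.run() == {"Pulp Fiction": 10, "Cats": 3}
    magic_api.filter.assert_called_once()


test_run_rates_tarantino_ten()
```

Compare this with the stacked-decorator version: the setup now reads as "here are the collaborators", and the assertion is about business logic only.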
If you need to mock something, you probably need to decide if it is worth the extra work, but in our case it has proven to be a very valuable tool. So we have one fixture that can create all of these imposters, and as we said, the advantage is that they have strong contracts about how they need to be called. They need to be called with certain parameters, and we can specify responses from them that also adhere to the same contract. We cannot return just half a dictionary; it has to be a full dictionary, but we can put fake values in it. So when we make the call, we hit a fake instance, not the actual service or API. We can test against, again, all possible and impossible scenarios, because users are going to find a way to use a service in the most wrong way. As you can see on the right-hand side, this is how it's going to look: we can instantiate this magic service imposter, and again we can specify verified responses; this time we say that when we make a call that hits that API, this is the response it is going to return. This is a good way to verify that your wrappers work well: that you don't call them with random parameters, but call them as they need to be called. So we have the unit tests that test all the details of our implementation, and we have a way to test that our wrappers work. What about end-to-end tests? They should just work, right? Someone has regretted saying that. But it is true: we have covered everything else, and now we can just test the happy paths. And how do we do that? In the latest attempt in the team, we decided to try a plug-and-play approach. Since we have dependency injection, we can create a generic test and pass different APIs to it, which the test will use and test against. So we can pass basically whatever we want. As we saw before, we could pass an imposter.
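The imposter tooling described here is internal to Bloomberg, so the sketch below is only a toy illustration of the *idea* of a verified fake, with entirely invented names: the fake enforces a contract in both directions, rejecting calls with wrong parameters and canned responses that are not complete dictionaries.

```python
class ContractError(Exception):
    """Raised when a call or a canned response violates the fake's contract."""


class VerifiedFake:
    """Toy 'verified fake': NOT the internal tool from the talk, just the idea.

    A real call and its canned response must both match a declared contract,
    so tests cannot drift away from how the service actually behaves.
    """

    def __init__(self, request_fields, response_fields):
        self._request_fields = set(request_fields)
        self._response_fields = set(response_fields)
        self._response = None
        self.calls = []  # recorded for later assertions

    def set_response(self, response):
        # The canned response must be a *full* dictionary, never a partial one.
        if set(response) != self._response_fields:
            raise ContractError(f"response must have fields {self._response_fields}")
        self._response = response

    def call(self, **params):
        # The caller must supply exactly the parameters the contract names.
        if set(params) != self._request_fields:
            raise ContractError(f"call must use parameters {self._request_fields}")
        self.calls.append(params)
        return self._response


# Usage: the contract says a call must pass `movies`, and responses must
# contain both `movies` and `dropped` (all names hypothetical).
magic = VerifiedFake(request_fields={"movies"}, response_fields={"movies", "dropped"})
magic.set_response({"movies": {"Pulp Fiction": 138}, "dropped": 0})
result = magic.call(movies={"Pulp Fiction": 138})
assert result["dropped"] == 0
```

The payoff is the same as described in the talk: a wrapper tested against such a fake is also verified to call the real service with the right shape of request.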
We can pass any other fake that we have created. And this is how it can look. In this case, we create one class that is a fake API and another class that is a real API. In our test class, we can make instances of these classes. Then we can have a test that is parametrized, and we can pass it different APIs, real and fake, whatever we want. And we can write only one test that tests against both. That means we don't need to duplicate our tests, we make our lives easier, and we make sure that the integration tests we had before are really valid, because they also pass when we plug in the actual dependency. Of course, you can then choose which tests you want in your continuous integration platform, and you can decide how much you trust your dependencies. For some of them it can work to just run all of the tests; for others, yeah, you have to make a call. So what has this talk been about? There are many values when it comes to good software design. We want it to be functional, we want it to be correct, we want it to be robust, we want it to be testable, we want it to be abstract, we want it to be extensible, and we could discuss all of these things for hours. The important thing is that testable is one of those values. I have been in a lot of heated discussions with developers about encapsulation and inversion of control: is it worth having my tests change the way I write code? The code that we wrote in the beginning looked fine at first, but it was our desire to write tests that made us really rethink our architecture. So the pain of writing tests drove our design decisions. And as I said, many can complain: I don't want to ruin my beautiful production code just to test it. Well, your production code doesn't mean much if it's not tested and not working. Testing is not optional nowadays, or it shouldn't be. It is a requirement for completing a task.
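The plug-and-play idea can be sketched as below. The talk uses a parametrized test in a test class; to keep this sketch self-contained and runnable offline, a plain loop stands in for the parametrization, the "real" API is stubbed, and every name is hypothetical.

```python
class FakeMagicApi:
    """In-memory fake: filters out nothing."""

    def filter(self, movies):
        return movies


class RealMagicApi:
    """Would talk to the real service over the network; stubbed here so the
    sketch stays runnable offline."""

    def filter(self, movies):
        # A real implementation would POST `movies` to the magic API.
        return dict(movies)


def check_filter_keeps_titles(api):
    """One generic test body, reused unchanged for every implementation."""
    movies = {"Pulp Fiction": "Quentin Tarantino"}
    assert api.filter(movies) == movies


# With pytest this would be:
#   @pytest.mark.parametrize("api", [FakeMagicApi(), RealMagicApi()])
# Here a loop plays the same role: the same test runs against fake and real.
for api in (FakeMagicApi(), RealMagicApi()):
    check_filter_keeps_titles(api)
```

In CI you would then select which parametrizations to run, exactly as the talk suggests: always the fakes, and the real dependency only where you trust it enough.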
So your tests are part of your code. You read them, you write them, you change them as you change your code, you maintain them. So testability is a good enough reason to affect your design decisions. And that is my thought on the matter. That was it, thank you for listening. You can find me on Twitter here. And of course we are hiring, so we have a channel on Discord; if you want to chat to any of our engineers, feel free to do so. Thank you. Thank you very much, Olga. So yeah, again, I forgot to mention that if you want to ask any questions, there's the Q&A button. There are no questions, Olga. Okay, everything was clear. Yeah, I think everyone's software is very testable, that's what I'm assuming. Okay, yeah, feel free to chat later in the corresponding channel; I'll be happy to discuss with everyone. I'm going to stop this here now. Cool, I can ask you a question: have you used any dependency injection library to help you with that? No, we haven't. I am aware that there are some, but we haven't tried anything yet; we just freestyled it. Cool. I think, yeah, I think we can just, yeah. I'll be happy to chat later. Okay, cool. So if you have any more questions, do drop a message on the Slack channel. Thanks, Olga, again. Thank you. And I'll see you later.