Hello everybody, the next talk will be about load testing web services at Mozilla with Molotov, by Tarek Ziadé. Can you hear me? All right. So everyone speaks French here, can I switch to French? No? Okay, too bad. So I'll have to do it in English, I guess.

So I'm going to talk about Molotov, which is a load testing tool we're writing at Mozilla to do some testing on our web services. If I had to give it a definition in a single sentence: it's a Python 3 load testing framework that focuses on web services.

Before I present the tool itself, I want to take a little time to explain what we're doing at Mozilla in terms of the stuff we deploy in the cloud. Mozilla is much more than a browser. How many people here use Firefox? Wow, that's most of you. When you use Firefox, there are web services interacting with your browser. For example, if you want to synchronize your bookmarks, the browser calls a web service in Python on one of our servers, and things like that. We also have big web applications. The two well-known web apps we maintain are the Mozilla Developer Network, a pretty awesome website where you can get help about CSS, HTML, and so on; it's usually what comes up when you search the Internet for something about JavaScript, it's usually Stack Overflow or MDN in the first results. And there is the Firefox add-ons site: basically every time you install an add-on in Firefox or Thunderbird, you go through this website. It lists all the add-ons you can install, and it also has a service to update the add-ons installed in your browser. So those are web applications.

Then there are the numerous web services we maintain to make everything work. I can't show the full list here; there are probably more than 50 web services out there keeping the whole Mozilla ecosystem running. One of the biggest is Firefox Sync, the one that lets you synchronize your history, your bookmarks, and so on across your devices. There is another one called Screenshots: you can now take screenshots in Firefox and share them. And there is a new one that's pretty cool: you can share a file that gets encrypted on the client side, and it stays in the cloud for, I think, a month or so. That one is experimental, but it might stick around because people like it. I think the file size limit is one gigabyte, so you can share some pretty cool stuff with it.

So all of those are our web services. If I had to define a web app, I would say it's an application that gets called, queries some backends, and spits out HTML for the end user. That's roughly my definition of a web app. Web services, on the other side, are like a subset of web applications: REST-ish applications that most of the time spit out JSON. Basically you have an HTTP API where you can call /items, /blah-blah-blah, use the right verb, and get back some JSON, send some JSON, and so on. It's not meant to be displayed in your browser, but it's useful when you want to do something with the data you get. So: web service versus web app. And if I go deeper into the difference, I would say that a web app has a part that gets executed in the browser if you have some JavaScript.
It also stores cookies, sometimes keeps a WebSocket open between your browser and the server, and it has a specific user authentication flow. For example, if you're on Strava and you use a third-party application, you might do an OAuth dance between the two websites: you have to enter your login and password and then grant access to the application. All of that work is done through a user interface. And last but not least, web applications usually have a lot of caching. I was talking about MDN and AMO previously; those are big Django websites, and we want to make sure we hit them as rarely as possible, because Django can be slow on some queries. So we have a lot of caching in front of Django to make sure the pages are fast enough, static files are pushed to CDNs, and so on. Web applications usually have a lot of caching.

Web services, on the other hand, usually don't have any caching. They're as dumb as possible. They just get queried, ping some database, bake some JSON, and send it back as fast as possible. It's pretty isolated, pretty simple, and it's meant to be an application-to-application transaction. So you have to have a client that's smart enough to understand the JSON it gets back and do something with it. For instance, in Firefox, when you use Sync, the browser does all the encryption on the client side and sends dumb queries to Firefox Sync: hey, please store that encrypted blob. And when you want to synchronize, you just call it, and it stays as dumb as possible. For me, that's the biggest difference: a web application is smart, displays a user interface, and does a lot of caching, while a web service is roughly a window onto the data or computation you have on the server side, and it stays as dumb as possible. As a matter of fact, the trend in the past two, three, four years has been to implement microservices, making sure each web service you deploy implements a single feature and doesn't try to do a bunch of different things.

So why do we test web services? Basically, at Mozilla, when we want to deploy a new web service and start doing some stress testing, the first thing we do is make sure we understand how the web service works: we want to understand its behavior and how things are supposed to work. So we implement scenarios that try to be as realistic as possible and then start sending load. One of the goals, of course, is to find its bottlenecks, but the goal is not necessarily to fix them. The goal is to understand how the application behaves and what its bottlenecks are. Maybe we'll fix them, maybe we won't; it really depends on the case. But that's basically why we load test our web services. And once we know how an application behaves, it gives us an idea of how we're going to deploy it. Since we deploy everything on AWS, we want to do some sizing: we want to understand the best size of VM to deploy on Amazon for a given application, and so on. So, maybe I should mute my telephone.

When we stress an application and load test it, a few things happen inside the box that gets stressed, the server: we're going to use some RAM, some CPU, and some FDs.
FDs are file descriptors. Basically, every time someone calls your web service and interacts with the server, it uses a socket, and a socket is a file descriptor that gets opened on your box. Those are the three main resources on the server that get exhausted when clients stress it. And sometimes an application is just so slow that we don't even have time to eat all the CPU or RAM; that's another case. So the goal of the stress test is to understand when that happens, and to make sure that when it happens, the web service doesn't behave in an erratic way. We've had a lot of cases where you start a load test against a service, everything starts to crash, and then after the load test, when you try to use the web service with a single user, nothing works anymore because everything is borked: all the connectors in your web service are in a state where they can't work anymore. We want to make sure our web services don't behave like that. We also want to make sure that when the server can't cope with the load anymore, it sends back clean errors; for web services, that usually means the 5xx errors.

So here is a list of the problems we hit when we load test our services at Mozilla. First, this is a Python talk, so how many people here implement Python services or applications? Please raise your hand. All right, and how many use Flask or Django? Okay, cool. When you're dealing with a Flask or Django application, or something similar, the biggest problem is its lack of parallelism: a Flask or Django process takes one request at a time and can't handle requests asynchronously. That can be a problem for some web services. Maybe it's not, but sometimes it is. And even though Flask has a feature to go multi-threaded, most of the time you don't use it because it opens a can of worms. So a Django or Flask application is usually a single-process, one-request-at-a-time service, and when you send a lot of load at it, requests stack up in the web server in front of it, like nginx or Apache, waiting for the process to take the next one, and you get timeouts really quickly. Maybe for some web services that's not a problem, but if you want something that scales well, you have to avoid this kind of issue. This is not true anymore if you're doing Python 3 and using something like aiohttp or frameworks like that, because in that case the server can accept new connections even while another request is still being responded to. So if we need something that goes fast, we ask people to avoid Django or Flask and build something that accepts multiple connections.

The other thing we often see is I/O-bound errors. A web service, 90% of the time, is just an I/O-bound application that opens a bunch of sockets to other services like Redis, memcached, or a database. It has a lot of sockets open to those services, and if you don't do things right, if you don't manage a pool of connections, if you don't take care of recycling connections that get borked, you get issues like the ones I'll list in a second (a quick pooling sketch follows below).
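To make that connection-recycling point concrete, here is a minimal sketch of what "doing the thing right" can look like with SQLAlchemy. This is my illustration, not the speaker's: the DSN and the numbers are made up, but bounding the pool and recycling idle sockets is exactly what prevents the errors listed next.

```python
from sqlalchemy import create_engine, text

# Hypothetical DSN and sizing values, for illustration only.
engine = create_engine(
    "mysql+pymysql://app:secret@db.example.com/appdb",
    pool_size=10,        # bound the number of open server connections
    max_overflow=5,      # allow a few extra sockets under bursts
    pool_recycle=3600,   # recycle sockets before MySQL's idle timeout
    pool_pre_ping=True,  # probe a pooled connection before reusing it
)

# Connections are checked out of the pool and returned to it,
# instead of opening a fresh socket (and FD) for every request.
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```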
The first is "too many open files": that's what you get when you load test a web service that doesn't properly recycle its sockets. It opens a lot of FDs on the server, and at some point the OS starts spitting out errors like that. Then there's "too many connections" from the database. That happens when you interact with Postgres, for example, and you don't take care of limiting the number of parallel connections on Postgres; it can happen that Postgres starts sending back errors saying, hey, stop it now. So you need to be careful about that. The other one we see often is "MySQL server has gone away". Who knows this message? Yeah, this is the crappy message MySQL sends back when you haven't done anything with a socket for a while: MySQL just shuts it down. So if your web service doesn't recycle sockets when they're idle, you're going to hit this issue. And of course, connection timeouts.

The last kind is memory exhaustion. For example, if you have a web service that puts a lot of data into a local Redis, and it keeps adding more and more data, and the scenario we use adds data at will, at some point memory will be full, and that's when trouble happens. Depending on the system, the OOM killer, the mechanism that runs on Linux distributions like Debian, will look at what's going on when the box runs out of memory and just kill your process, or the server may even restart. You want to avoid that. So those are the three kinds of problems we find when we load test our web services: lack of parallelism, I/O-bound errors, and memory exhaustion.

So what's a healthy web service? A healthy web service is not necessarily one that can handle hundreds of thousands of requests per second or whatever, because sometimes you don't need that. A healthy web service is one whose limits you understand and whose performance is good enough. That's the most important thing. If a web service is supposed to handle 100 requests per second and your load test shows it can do 200 or 300, you're good to go. The other thing that's super important is resilience: the service is able to get back on its feet. When you send a lot of load, it spits out a lot of 500s, but once you go back to a regular load, everything works as it used to. If we don't have that, we don't give the green light for the service to be deployed. And last but not least, especially at Mozilla, you want to make sure the service has a path for scaling. Even though we deploy something that's good enough for the load we expect, we want to know how to scale it. Some services are super simple: we just add new boxes on Amazon. Okay, we have more users than expected, so instead of five boxes we run fifty, and that's it. But sometimes, depending on the design, if you're not doing proper sharding on Postgres or whatever, you might have cases where adding boxes won't help the service cope with more load. So you want to think about that when you design for scaling.

One very important thing: in the past five or six years, I've never seen a web service work on the first load test. Never.
I mean, even the services we build ourselves all fail when you start load testing them, because there's always a round of tweaking all the configuration: making sure you do the right things with Postgres or MySQL, making sure your pools behave, tuning all the timeouts, and so on. So unless you're Chuck Norris, I'm pretty sure the tools we're going to use will break your stuff. That's mostly guaranteed.

All right, so how do we load test a web service? The pattern is pretty simple. We send some load with realistic client behavior. We don't do crazy fuzzing stuff, like sending bytes super slowly or other tricks to try to kill the server; we just send load as if it came from regular clients. We collect some metrics to see what's going on, and then we do it again and again and watch what happens. To do this, you have two options. You can use a laptop like this one and start sending load at your service, or you can run a distributed test across a cloud, where you have, I don't know, 100 boxes interacting with your endpoint. Depending on the case, sometimes you want a distributed load test, and sometimes a laptop can do everything. And frankly, for most web services we test, we're able to kill them with a single laptop, because web services are I/O bound, and as long as you send enough concurrent requests, you're going to break them. You don't need a very expensive distributed system to break stuff. The only case at Mozilla where we need to deploy a lot of nodes for a massive load test is Web Push, because we have to keep WebSocket connections open for hours to see how the server behaves. You can't do that from a single spot, so you have to deploy something. But other than that, with just this laptop, as long as I have the bandwidth and the network, I can break any web service at Mozilla with one tool.

Metrics. One thing that's super important about metrics: there are a lot of tools out there that, when you start load testing, will tell you your server can do 500 requests per second; then you run the same test from another box and suddenly you get 2,000 requests per second, because that laptop has a better network connection; then you try it on your grandmother's laptop and suddenly you get 10 requests per second. The problem is that all the tools that report client-side metrics are measuring the network round trip, the network congestion, and whatever is going on in the laptop you're using. I don't think that's a good metric. It doesn't mean anything. When Apache Bench tells you your server is doing that many requests per second, that number is not really useful for understanding what's going on. What you need are metrics on the server side. You need to make sure all the web services you deploy have a good metrics system that gets gathered somewhere. For example, on Amazon we use tools like Datadog, which provide live metrics using statsd. That's what you want to look at when you do load testing; you don't want to rely on a client-side metric. A minimal statsd sketch of the server-side approach follows below.
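As a hedged sketch of what server-side metrics can look like (my example, not Mozilla's actual setup; the host, port, and metric names are invented): a statsd client inside the service emits counters and timers over UDP to an agent, such as Datadog's, which aggregates them away from the load-generating laptop.

```python
import time
import statsd  # pip install statsd

# A statsd daemon (or the Datadog agent, which speaks the same
# protocol) listens on UDP 8125 here; address and metric names
# are illustrative.
stats = statsd.StatsClient("localhost", 8125)

def handle_request():
    stats.incr("myservice.request")          # one counter tick per request
    with stats.timer("myservice.handler"):   # latency measured server-side
        time.sleep(0.01)                     # stand-in for the real work
```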
So, existing tools. Before I wrote Molotov, I looked at what was out there. There's Apache Bench, but for testing web services it's not useful, because you can't run complicated scenarios unless you try super hard. Apache Bench just sends load at an HTTP endpoint, and it's pretty limited. So that was out of the question. There's Boom, another tool I created that mimics Apache Bench, but it's the same story: it's not for complicated load testing, it's for smoke tests, just hammering one endpoint. Then you have a bigger tool, Apache JMeter. That's a Java application where you spend hours clicking around to create your load test, and then you can export an XML file and re-import it. I mean, I'm not trying to bash Java or anything; I'm just saying that JMeter's user interface is not meant for creating web service load tests. It's basically a better Apache Bench, so it's not the right tool for proper load testing of web services either. Then you have The Grinder. The Grinder is a Jython framework, and it's pretty cool, actually: you write everything in Jython, so you can create Python modules and have them executed on the Java runtime. But you need gigabytes of memory to run it, and sometimes we want to run a load test that calls a few endpoints and we want it super fast. If the test is going to eat two gigabytes of memory, it's a bit of a pain to deploy on Jenkins and so on. Then there's another one called Bees with Machine Guns. That's the coolest name; I'm mad they don't make t-shirts, because I'm pretty sure those t-shirts would be cool. Bees with Machine Guns is an orchestrator that lets you run a distributed load test on Amazon: you create a small Python script, and Bees with Machine Guns takes care of spinning up micro instances on Amazon and tries to kill your box. This one is pretty cool, but in our case, most of the time we want to do something locally, so it's a little bit overkill for what we want to do.

And then there is Locust. Locust is in Python; it uses gevent and ZeroMQ. It's pretty cool, and we tried to use it, but there are a few limitations. It's meant to work as a distributed system; you can also run it locally, but it uses ZeroMQ, which means that if you get a network split, you get issues with the ZeroMQ pipes, because they break and so on. That's my next slide, actually. If you want to build something distributed, it's super hard to do it properly, because you have to take care of what happens on network splits and so on. And when you run a load test that lasts for hours, you can't rely on queues like ZeroMQ; you have to do something disconnected, and you have to collect all your data using something that's designed for it. That's why we ended up relying on Amazon for all of this: Amazon now gives you a built-in orchestrator and the tools to do all the orchestration and data exchange. I don't think it's a good idea in 2018 to build your own client-server system for load testing. It's better to rely on what's already there in the cloud, on Amazon or elsewhere. So given that, our plan at Mozilla was to provide a simple tool for writing load tests super quickly.
We want people at Mozilla who are not familiar with Python to be able to write load tests by copying and pasting other existing load tests, so we keep it simple. From there, once we have a load test that can hit a web service, we just put it in a Docker image, and when we need a distributed load test, we use Amazon and Docker to deploy and orchestrate massive load tests that can run on 200 or 300 boxes. But like I said before, most of the time we just take the test, run it on a laptop, break the service, and go see the dev: hey, I broke your service in two seconds, now you need to do something about it.

So, Molotov. That's slide 19, so now I can talk about Molotov. Molotov is a non-distributed load testing framework we wrote using Python 3. It's super simple, because, like I said, we want anyone at Mozilla to be able to write load tests. It's highly extensible: it's a framework where you can add extensions. It has a lot of concurrency: it's based on the native coroutines we now have in Python 3, so you can run tens of thousands of concurrent requests from a single process, and it can also run as multiple processes. And it's based on aiohttp, an async HTTP framework for Python 3. Basically, if you're familiar with requests, the synchronous HTTP client, the aiohttp client is roughly the same thing, but using async programming in Python 3. One laptop running Molotov, this laptop, can generate over 30,000 requests per second against an endpoint, as long as I'm connected to a real network. So with coroutines in Python, you can break most web services.

This is a simple example. Is it big enough? Can you see over there? Yeah, cool. So that's the simplest example. Once you import Molotov, you get a bunch of... okay, well, there's a syntax error on the slide. The scenario decorator is part of the molotov package, so let's say it's from molotov import star. When you have a coroutine, you decorate it with the scenario decorator, and from there you get a session object, which is an HTTP client you can use to load test your server. From there, you can just look at the aiohttp documentation; it provides a very simple way to interact with an HTTP server. Here I'm doing a GET on some HTTP API, getting back a response, and checking that the status is 200. Pretty simple. People who are not familiar with Python, or did it a long time ago, at first get a little worried about all these async keywords; they don't quite understand what's going on there. But it's okay. I just tell them to copy and paste this example and change the URL, and that's it. That's the only thing to know: there is async def and await, and you're good to go; you can do some load testing. And once you have this module, you can run molotov example.py, pass it a few options, and bam, you have a load test. In this example, I'm forking 10 processes, and each process runs 200 workers, a worker being a coroutine that calls this scenario over and over until it breaks. The -x option says: run that, and stop as soon as there's an error. Of course, you can create more scenarios. Here I have two scenarios, do_this and do_that; the second one checks the data you get back from the server.
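The slide itself isn't in the transcript, so here is a reconstruction of the two-scenario example from the description above. The URLs and the exact assertions are my guesses; the decorator and session usage follow Molotov's documented API.

```python
from molotov import scenario

@scenario()
async def do_this(session):
    # Simple GET, checking only the status code.
    async with session.get("http://localhost:8080/api") as resp:
        assert resp.status == 200

@scenario()
async def do_that(session):
    # POST some JSON and check the data that comes back.
    async with session.post("http://localhost:8080/api",
                            json={"key": "value"}) as resp:
        res = await resp.json()
        assert res["result"] == "OK"
```

And per the invocation described above, something like `molotov -p 10 -w 200 -x example.py` would fork 10 processes of 200 workers each and stop on the first error.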
When you create two scenarios like that and run them with molotov, each time a worker finishes a scenario, it randomly picks do_this or do_that, runs it, finishes, randomly picks the next one, and so on, in a loop until the test is over. And you can add weights, because here, since it's random, after a while you'd run do_this 50% of the time and do_that 50% of the time. With weights you can say: I want do_that 90% of the time and do_this 10% of the time. So writing Molotov scenarios for a load test consists of creating one coroutine per scenario, specifying the weight of each scenario so the mix is as realistic as possible compared to what users actually do with the web service, and then calling the molotov command line. It's also good documentation for people, because they can look at the load test and know that the scenario in do_that represents 90% of the calls made to the web service.

We have test fixtures, so you can create a function that sets up, for example, authentication headers. Building an auth header might take a few seconds if you have to interact with an OAuth server or something like that. So here, in prepare_some_stuff, I'm creating a variable called auth where I store the auth header, and then in the scenario I can read that variable back and use it in my headers. And this is basically the lifecycle of a Molotov test: we have decorators everywhere for you to hook in. If you want to do something before the test runs, you use the global setup, before the test is launched. You can do things per worker: that's the setup decorator. You can do things at the session level, when a worker creates its aiohttp client. And for when things get destroyed, you have three different teardowns: at the session level, at the worker level, and at the end of the test.

We have an event system, where you can register a function that receives events. Every time something happens in Molotov, a request was sent, a response was received, the number of active workers changed, a scenario started, a scenario succeeded, a scenario failed, and so on, your function gets called. This is useful if you want to keep track of what's going on inside Molotov. We also have extensions. A use case: say you want to dump all your results into a single file. You can create a module, here called csvdump, where you implement an event handler that records every request that gets sent, and you hook into the teardown at the end of the test to dump the list of requests you made. Once you've created this file, you can load it on the command line with the --use-extension option and tell people: whenever you run a load test with Molotov, you can use my little csvdump file and have your test dump its results into a CSV file, whatever. A sketch pulling these pieces together follows below.
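Here is that sketch, combining the weights, the auth fixture, and the event and teardown hooks just described. It's illustrative, not the talk's actual code: the decorator names follow Molotov's documented API, but the endpoint, the header value, and the event payload handling are my assumptions.

```python
# loadtest.py -- an illustrative sketch, not the slide's code.
from molotov import (global_setup, setup, global_teardown,
                     scenario, events)

API = "http://localhost:8080/api"   # made-up endpoint
auth = {}
sent = []

@global_setup()
def prepare_some_stuff(args):
    # Runs once before the test starts: build the expensive OAuth
    # header here rather than inside every scenario call.
    auth["Authorization"] = "Bearer some-token"

@setup()
async def init_worker(worker_num, args):
    # Runs once per worker; the returned dict is passed to that
    # worker's aiohttp client session (here: default headers).
    return {"headers": auth}

@scenario(weight=90)
async def do_that(session):
    async with session.get(API) as resp:
        assert resp.status == 200

@scenario(weight=10)
async def do_this(session):
    async with session.get(API + "/other") as resp:
        assert resp.status == 200

@events()
async def on_event(event, **info):
    # Fires on every internal event (request sent, response received,
    # scenario success/failure...); payload keys vary per event.
    if event == "sending_request":
        sent.append(info)

@global_teardown()
def dump_results():
    # Runs once when the whole test is over -- csvdump-style, this
    # is where an extension would write its results file.
    print("requests sent:", len(sent))
```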
You can also run tests straight from Git. Say you have a Molotov test in a Git repo and you want to run it from your laptop or your CI system without installing anything: you can use moloslave with the URL of the repo and the name of the test. As long as there's a molotov.json file at the top of the repo, moloslave will clone the repo, look at that file, find the test and its scenario, and run it for you. You can even add options, like requirements: that will pip install some packages into your environment before the test runs. You can also set environment variables. It means that if I install Molotov, I can moloslave any repo, as long as it's public, and it will run the test for me; I don't have to install the test locally. I can also run it from Docker: I have a Docker image with Molotov installed that runs moloslave. It's pretty straightforward: you do docker run, you give it the repo and the name of the test you want to run, and it does the same thing, but inside a Docker image. We want that because in our CI environments, if we want to run a load test against a project, we have to do it inside Docker images, so we can use that too.

To sum up: Molotov tests are simple Python coroutines. We have a weighting system for scenario distribution. We have hooks everywhere during the test, so you can add code where you need it. We have an event system and extensions. You can run tests from Git, and you can run them in Docker. So that's level one in your hierarchy of needs in load testing. Level two is taking everything I've explained and putting it into a CI, so you can do continuous load testing. The goal is that every time your developers change something in a web service, you can deploy the new version somewhere and rerun your load test to make sure there is no performance regression. Okay, thank you, that's it. Oh, just one thing: the logo is by Juan Pablo Bravo, who does a lot of cool icons if you want to check them out; it's CC-BY. If I make a Molotov t-shirt, who wants one? Please raise your hand. Okay, cool, that's good enough. I'll make some.

Q: Great talk, and a cool tool. Just another one that wasn't on your list: Gatling.
A: Can you speak louder?
Q: Another tool that wasn't on your list, called Gatling. It's written in Scala, with Akka actors. Might be worth checking out. I'm also wondering: how easy do you think it would be to extend this for something like database testing?
A: Database testing, like speaking TCP instead of HTTP?
Q: Effectively, yes. Connecting to something, with queries in scenarios, with weights.
A: You would have to plug in another session object instead of the aiohttp client. If you provide the same API, I don't think it's a lot of work.
Q: Cool, thanks.

Q: I'm currently using pytest to do similar things, like calling an HTTP API and WebSockets. What benefits does Molotov have over pytest?
A: pytest is not going to create a lot of coroutines and run them in parallel. When you run something in pytest, it's meant to run everything a single time. The difference is that Molotov takes care of running a bunch of coroutines for you and shutting them down.
Q: One of my colleagues is currently working on implementing parallel execution in pytest.
A: Okay, then it probably wouldn't give you a big benefit over pytest. You would also have to deal with the session creation in pytest, but if you have something that does that, it's pretty similar.

Q: Hi, thank you for your talk. Do you think we can use Molotov to test systems that scale up automatically?
A: I'm sorry, I can't hear you. Can you speak louder?
Q: Do you think we can use Molotov to test systems that scale up automatically when the load is too high?
A: Yeah.
Well, you can test that it scales automatically, or you can turn off the auto-scaling and see how a single node behaves, so you know when it will need to scale.

Q: Small question: is there any example of what kind of output Molotov produces, and what kind of reporting information it can provide?
A: Molotov just shows you that something is going on. You get back all the tracebacks when something goes wrong, and if you run a test that lasts, I don't know, ten minutes, you'll see progress and things like that, but that's it. If you want more, there are hooks to add more information. But most of the time we run it and look at what's going on on the server side.

Q: Thanks for the talk. I've used Locust, and one of the nice things I found in Locust was the user feedback: when you're developing your test, you start it, you see what's going on, you can stop it, and it's also nice to show your end user some graphical feedback. How easy would it be, or what kind of UX does Molotov have, to see what it's doing as it goes? Any chance of getting a web view of that?
A: On the client side it's pretty limited: it's just a command line with a progress display, and it will only show tracebacks, because everything else we do on the server side with our consoles and Datadog. We don't have any fancy user interface because we don't really need it. I guess Locust is better for this, and for some use cases it's maybe better to use Locust, because it has that view you can show to people. But for our use case, we don't need that.

No more questions? All right, thank you. I'll be around in case you want to chat. Thank you.