Okay, I'm going to start with a short story of enterprise software, a story you're probably familiar with. We start with good ideas, we code, we test, we launch, we celebrate, and then, oops, new requirements come in and adjustments have to be made. Before you know it, you're making compromises you have to live with, and soon enough your system starts showing latency problems that you have to identify and fix.

The first technique we use is a neat package called line_profiler. To use line_profiler, you put a decorator on the functions you want to profile, and you run your executable through a wrapper called kernprof. Let's take an example. Here we have a function that loops ten times, calling three different functions on each iteration. We want to profile it, so we put the decorator on top of it and run it through the kernprof wrapper, and it gives you this neat output with your code in there and the statistics next to each line. So here, unsurprisingly, we have the 50-millisecond sleep taking 50% of the time. This is good for a dev environment, but it slows down your execution, and the statistics are only available when the executable ends.

So for a prod environment, we wrote a little class called the chronometer. That's almost all the code of it, so you can see it's a very small class. The guts of it are in this function called mark: you take the current time, end the previous mark, and start the new one, and it puts the information in the logs. Let's modify our little test here. We use a chronometer and put little marks, one before the first print, one before the loop, and one before the last print, and then we log. It gives you something like this, and here we see that the loop takes all the time, of course. If we had put marks within the loop, we would have ended up with something like 30 marks.
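The line_profiler workflow described above can be sketched roughly like this (the function names and sleep durations are made up to mirror the talk's example; `profile` is the decorator that kernprof injects as a builtin at runtime):

```python
import time

try:
    profile  # kernprof injects this decorator as a builtin
except NameError:
    def profile(func):
        # no-op fallback so the script also runs without kernprof
        return func

def sleep_1():
    time.sleep(0.025)

def sleep_2():
    time.sleep(0.05)   # expect roughly 50% of total time here

def sleep_3():
    time.sleep(0.025)

@profile
def process():
    # loop ten times, calling three different functions
    for _ in range(10):
        sleep_1()
        sleep_2()
        sleep_3()

if __name__ == "__main__":
    process()
```

Running it as `kernprof -l -v script.py` prints the per-line timings when the script ends; running it directly with `python script.py`, the fallback decorator keeps it working unprofiled.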
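The chronometer itself might look something like this minimal sketch (the class and method names follow the talk; the exact log format and the sample marks are assumptions):

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

class Chronometer:
    def __init__(self):
        self._marks = []            # completed (label, seconds) pairs
        self._label = None          # label of the currently open mark
        self._start = time.monotonic()

    def mark(self, label):
        # end the previous mark, if any, and start the new one
        now = time.monotonic()
        if self._label is not None:
            self._marks.append((self._label, now - self._start))
        self._label = label
        self._start = now

    def log(self):
        self.mark(None)             # close the last open mark
        for label, seconds in self._marks:
            logging.info("%s: %.1f ms", label, seconds * 1000.0)

# one mark before each section of the little test
chrono = Chronometer()
chrono.mark("first print")
print("starting")
chrono.mark("loop")
for _ in range(3):
    time.sleep(0.02)
chrono.mark("last print")
print("done")
chrono.log()
```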
We want something more like line_profiler, something that accumulates all the marks, so we derive a little class from our chronometer called the cumulative chronometer, and here we accumulate the total duration for each mark. Let's take another example, and here we can put marks within the loop: sleep 1, sleep 2, sleep 3. In the logs you'll see something like this, and sleep 2 takes 500 milliseconds, unsurprisingly. Contrary to line_profiler, instead of one roll-up of all the statistics at the end, you get one log entry per execution of the instrumented code, and it's in fact better this way, because you can identify edge cases in your service and get the statistics for that particular case.

The third tool we use is matplotlib, for the case where you have the usual tiered service: the client browser makes a request to the front-end, and that single request can turn into multiple exchanges between the front-end and the back-end service. So what we did is have the back-end service log timestamps at the beginning and at the end of each request. Here, let's say your click hit the front-end and translated into six function calls on the back-end. You collect those timestamps, and with this little script using matplotlib, we end up with a graph like this. The highlighted sections are the time spent in the back-end, and since we have start and end times, we can also deduce the time spent in the front-end. Here we can see that between the first and the second call, the front-end did something that we have to diagnose.
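A sketch of the cumulative variant might look like this (shown self-contained here for clarity, although the talk's version derives from the basic chronometer; the mark labels and sleep durations echo the talk's example):

```python
import time
from collections import defaultdict

class CumulativeChronometer:
    def __init__(self):
        self.totals = defaultdict(float)  # label -> accumulated seconds
        self._label = None                # label of the currently open mark
        self._start = time.monotonic()

    def mark(self, label):
        # add the elapsed time to the previous label, start the new one
        now = time.monotonic()
        if self._label is not None:
            self.totals[self._label] += now - self._start
        self._label = label
        self._start = now

    def log(self):
        self.mark(None)                   # close the last open mark
        for label, seconds in self.totals.items():
            print(f"{label}: {seconds * 1000.0:.1f} ms total")

# marks placed within the loop: ten iterations accumulate into
# three totals instead of thirty separate marks
chrono = CumulativeChronometer()
for _ in range(10):
    chrono.mark("sleep 1")
    time.sleep(0.01)
    chrono.mark("sleep 2")
    time.sleep(0.05)
    chrono.mark("sleep 3")
    time.sleep(0.01)
chrono.log()
```

With ten iterations of a 50-millisecond sleep, "sleep 2" accumulates roughly the 500 milliseconds mentioned above.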
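That little matplotlib script could be sketched along these lines (the six (start, end) timestamps are invented for illustration; in practice they come from the back-end logs):

```python
import matplotlib
matplotlib.use("Agg")            # render without a display
import matplotlib.pyplot as plt

# (start, end) timestamps in seconds for six back-end calls made
# while serving one front-end click -- invented for illustration
spans = [(0.00, 0.12), (0.75, 0.90), (0.92, 1.05),
         (1.08, 1.20), (1.24, 1.38), (1.40, 1.52)]

fig, ax = plt.subplots(figsize=(8, 2))
# broken_barh takes (start, width) pairs: the bars are back-end time,
# the gaps between them are time spent in the front-end
ax.broken_barh([(start, end - start) for start, end in spans], (0, 1))
ax.set_xlabel("time (s)")
ax.set_yticks([])
ax.set_title("Back-end time per call (gaps = front-end time)")
fig.savefig("timeline.png")
```

The large gap between the first and second bars is the kind of front-end pause worth diagnosing, as in the talk's example.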