I want to talk about performance testing. There will be a more detailed talk about performance testing later; this is the big picture. What I want to talk about here is what we plan to bring to our testing systems in the near future. We have already started to implement and set this up; now we need to polish it and properly include it in all of our processes.

What we want to do with this kind of performance test is to create an actual instance and run it on a well-defined hardware set, without changing the hardware underneath, so that results from different runs of the performance test are comparable. And we want to simulate a given workload: say we have a server with a certain amount of RAM and a certain number of CPUs, there are thousands of clients connected to it, and then we execute a given task, maybe upload a file, and measure how long it takes. This will allow us in the future to compare a pull request to a previous version or to the current master, to see how different code changes affect the performance metrics of our system, and to be able to say: OK, this change looks good and is really awesome, but maybe you need to rethink it because it has a huge impact on performance. So it is really about giving us a big-picture view of how our system behaves: not only testing it on a development system with one or 20 users, but actually having a system with thousands of files and 100 clients connected, and then measuring how the system behaves under different workloads.

There are some challenges, and we are currently experimenting with different ways of solving them. The very first one is: what is a typical workload on a real instance? We have some statistics from bigger and smaller customers about how requests are distributed, how many uploads there usually are, how many reads, and how often the client connects. We have some rough stats, and we try to mimic them as closely as possible, but we also plan to get feedback from users, customers, and bigger installations about what usual setups look like, and compare those to our future performance setup.

The next big thing is: how can we simulate this task? We have something in mind, but we are always happy to get feedback; maybe somebody has experience with this and can recommend some tools to use for performance testing. So talk to me later today or this week if you have ideas on how to do this, and on how to properly integrate it into an automated setup. Best would be to have it run on every pull request, so we get direct feedback, but I guess that is for the far future, because it is really resource-hungry and needs a lot of proper setup. So we first want a very light version that we run only manually, and later integrate it more and more into our automated testing and monitor it constantly.

What the current architecture looks like: on the left side we have the GitHub endpoint, where the developer sees a result or gets feedback, maybe from a bot that says: your pull request looks fine, but there is a 5% decrease in PROPFINDs. This connects to an API server, which is roughly implemented. There is already a running prototype that holds all the performance metrics gathered while the performance test runs, and it also provides the endpoints for the GitHub hooks and the integration. So the API server is where the actual results are stored and where they can be fetched from. Then, on the right side, we have our workers. They connect to this API server and ask: is there a new task I need to execute? What do I need to check out, which GitHub repo? How should I set up the server? How many clients need to be connected?
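To make the worker side concrete, here is a minimal sketch in Python of what such a polling loop could look like. The API address, routes, and JSON fields are all hypothetical illustrations; the actual prototype may differ:

```python
# Hypothetical worker loop: poll the API server for a task, check out the
# requested revision, run the test, and report the metrics back.
import subprocess
import time

import requests

API = "https://perf-api.example/api/v1"  # placeholder address


def run_performance_test(workdir: str, num_clients: int) -> dict:
    # Placeholder: here the worker would set up the server from `workdir`,
    # connect `num_clients` simulated clients (for instance with a load
    # tool like the Locust sketch further down), run the workload, and
    # collect the timings.
    return {"upload_p95_ms": 0.0, "propfind_p95_ms": 0.0}


def run_worker():
    while True:
        # Is there a new task I need to execute?
        resp = requests.get(f"{API}/tasks/next", timeout=30)
        if resp.status_code == 204:
            time.sleep(60)  # nothing to do yet, poll again later
            continue
        job = resp.json()

        # What do I need to check out, which GitHub repo?
        subprocess.run(
            ["git", "clone", "--branch", job["ref"], job["repo"], "checkout"],
            check=True,
        )

        # How should I set up the server, how many clients?
        metrics = run_performance_test("checkout", job["num_clients"])

        # Store the results on the API server so they can be fetched later.
        requests.post(f"{API}/tasks/{job['id']}/results", json=metrics)
```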
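For driving the simulated clients, one candidate tool (a suggestion, not a settled choice) is Locust. Here is a rough sketch of a workload that mixes uploads and folder listings; the WebDAV-style paths, credentials, and request weights are placeholders, not our real customer stats:

```python
# Sketch of a simulated workload in Locust: many PROPFINDs (folder
# listings), occasional uploads. All paths, credentials, and weights
# are made up for illustration.
import os
import random

from locust import HttpUser, task, between


class SyncClient(HttpUser):
    wait_time = between(5, 30)  # seconds between requests per client

    @task(9)
    def list_folder(self):
        # Folder listings dominate the request mix of a sync client.
        self.client.request(
            "PROPFIND",
            "/remote.php/webdav/",
            headers={"Depth": "1"},
            auth=("testuser", "testpass"),
        )

    @task(1)
    def upload_file(self):
        # Upload a small random payload and let Locust record the timing.
        name = f"perf-{random.randint(0, 999)}.bin"
        self.client.put(
            f"/remote.php/webdav/{name}",
            data=os.urandom(64 * 1024),
            auth=("testuser", "testpass"),
        )
```

Started with something like `locust -f workload.py --host https://test-instance.example --users 1000`, this would record response times per request type, which is exactly the kind of metric the API server is meant to store.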
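And the feedback on the GitHub side could boil down to a comparison like the following: take the stored metrics of the current master as a baseline, compare the pull request's run against it, and flag anything that got more than, say, 5% slower. Metric names and numbers are invented for the example:

```python
# Hypothetical regression check for the bot: compare a pull request's
# metrics against the master baseline and report what got slower.
def find_regressions(baseline: dict, candidate: dict,
                     threshold: float = 0.05) -> list:
    notes = []
    for name, base in baseline.items():
        # Metrics are durations, so a positive change means "slower".
        change = (candidate[name] - base) / base
        if change > threshold:
            notes.append(f"{name} is {change:.0%} slower than master")
    return notes


# Example: PROPFIND p95 went from 120 ms on master to 132 ms in the PR.
print(find_regressions(
    {"propfind_p95_ms": 120.0, "upload_p95_ms": 800.0},
    {"propfind_p95_ms": 132.0, "upload_p95_ms": 790.0},
))
# -> ['propfind_p95_ms is 10% slower than master']
```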
So this is the rough setup. It currently works; it runs manually for now, but I plan to make it more automated and scale it up a bit into a nicer setup. If you have feedback or ideas, or just want to talk with me about this, come over afterwards. I guess the time is too short for questions now, but if you have a short one, feel free to ask. Thanks.