So, thank you everyone, thanks for coming, and let's start right away, because unfortunately we don't have that much time. Today we're going to talk about quite an interesting topic: why it's important to pay attention to some interesting low-level details when you're working with, not necessarily Postgres, but pretty much any database in general. First of all, a little bit of background. My name is Dmitri, I work for Zalando, for a couple of years already, and I've been with the PostgreSQL community even longer. From my contributions you've probably used the jsonb functions, or something more recent like the pluggable storage work, or improvements in psql and pg_dump, and so on. At Zalando, due to various reasons, we have to run Postgres in a really large number of different environments: legacy data centers, pure cloud on AWS, and of course inside Docker and inside Kubernetes. We're doing this, I must say, successfully, thanks to two open-source projects that you can check out on GitHub. They're really popular: Patroni, for example, has about two and a half thousand stars on GitHub already and is used by a lot of different companies, and Postgres Operator, our own Kubernetes operator, which this year was accepted into Google Summer of Code and, if I remember correctly, successfully completed it. So yeah, we're doing really interesting stuff. And this situation was actually the basis for this talk: because we have to run Postgres in different environments, quite frequently we see interesting problems or issues that happen in the interaction between Postgres and something else.
So let me show this in a little more detail. I'm not sure how many of you are actually using Postgres — can you raise your hand if you're using Postgres? Cool. And who is just interested and curious? Okay, cool. Normally, when you have a database — Postgres, or pretty much any other database — and you want to know what happens inside, the database provides you a lot of different views and information. For Postgres they're usually called pg_stat_something: pg_stat_activity, pg_stat_statements and so on. And usually — in maybe 90% of cases, actually much fewer — this information is enough to figure out what's going on. But this information has one drawback: it displays either the state of your database or its intentions. It cannot tell you anything more; it cannot provide you anything from outside of Postgres itself. And then we suddenly realize: aha, but we don't run Postgres in a vacuum. We run Postgres on top of some operating system, and people start thinking: there is a connection here, Postgres interacts with the operating system, something could go wrong there, so we have to monitor it too. Okay.
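As a concrete example of that first source of information, this is roughly the kind of query I mean — a sketch that assumes a local instance and psql on PATH:

```shell
# A sketch: what Postgres itself can tell you about its own activity.
# Assumes a local instance and psql on PATH; prints a note otherwise.
psql -Atc "SELECT pid, state, wait_event_type, wait_event
           FROM pg_stat_activity" 2>/dev/null \
  || echo "no local PostgreSQL instance reachable"
```

Everything this returns comes from inside the database itself — which is exactly the limitation I just described.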
Usually people say: let's monitor something global, like CPU utilization or I/O utilization. But then suddenly people realize: we are not only inside some particular operating system, we are also inside a Docker container, inside a cgroup. And already at this point people start to scratch their heads: what should we monitor in this case? It's another level of complexity, and it introduces its own interesting tricks and situations. Not knowing what to do, they start to read some strange blog posts, some outdated documents, and of course sometimes they make wrong decisions. And then they suddenly realize that things are even worse: they're running this container on top of some virtual machine at a cloud provider, and they're totally confused. And then, as the culmination of all this, the virtual machine is part of a Kubernetes cluster, just one of the nodes, and those people are completely lost. They have no idea what to monitor, and instead of just one nice Postgres they have a lot of different layers.
Usually at this point people say: okay, we're a serious business, we don't have time to think about this low-level stuff, let's just reboot our server, restart our database, and hope that everything goes away. Of course, usually it doesn't happen like that, and in the near future, most of the time, this error, this problem, this issue appears once again. I don't agree with this approach, first of all because it's just a mitigation of the symptoms, not of the problem — the problem will appear again and again and again. But second, people who act this way are losing really important knowledge about how their system works, and without this knowledge it can sometimes be hard to reason about performance and about your system in general. So yeah, a little bit of agenda. Unfortunately, there is no strict plan for these slides; it's basically a collection of different use cases that I found interesting or useful, where an issue could not be troubleshooted within Postgres itself and you have to apply a different approach: step back a little bit, start to think outside of the box, and use some different techniques, some different information sources. So what are we talking about? If you desperately need a plan, here is approximately the plan for these slides. First of all — and this is quite often underestimated — one such source is the source code of the product you're using, and I would really, highly encourage you to read the source code, at least the Postgres code, because it's just an amazing codebase. It's amazingly documented, and sometimes you can get even more information from the Postgres source code itself than from the documentation. We did it a few times, for example.
I did it many times personally: when there were some discrepancies between different versions, I just tracked the change down through the source code history in git, and only afterwards found out that there was some documentation for it. For Linux — for the Linux kernel, for example — it's a bit of a different beast, but still, all the new stuff, like the Kyber I/O scheduler or cgroup v2, is also decently documented and decently written; it's nice to read it too. The second section is about tools that administrators usually know about, like strace — maybe not so much gdb, but nevertheless — and perf. The third source is basically those virtual file systems that the Linux kernel provides us: procfs, sysfs, cgroupfs and some others. And at the end we're going to devote not a little bit, but a significant amount of time to BPF — extended BPF — and BCC. So yeah, the first example. Let's imagine you have PostgreSQL, a relatively recent version, 11 or 12 or something, and you run some analytical query, and suddenly, instead of a result, you're getting this error: blah blah blah, could not resize shared memory segment, and so on and so forth. Usually this error can lead to a panic — well, for people who don't know what to do — but in fact we can troubleshoot it relatively easily and straightforwardly. Here we can deploy straightforward strace. For those of you who don't know — though I hope everyone does — strace is basically a tool that shows all the system calls your application is doing. Not everyone is aware that in modern versions of strace there is a nice key, -k, that also shows the full stack trace from the application, if available, for each particular system call. So how do we troubleshoot this problem? We have Postgres, we attach to the backend, we start tracing, and we see: aha, we open this shared memory segment, we try to allocate
something, and we failed. And where did we come to this system call from? We came from ExecInitParallelPlan. So obviously Postgres is trying to do something in parallel, and then everything falls into place: in relatively modern versions of Postgres, parallel workers finally started to work properly, but every parallel worker requires a separate shared memory segment, and then there is another catch that we see quite frequently. Unfortunately, Docker by default limits /dev/shm to 64 megabytes, and of course for huge analytical queries sometimes that's not enough. Here we are. You may say that this is kind of a hack and a cheat, because obviously you cannot always analyze errors from within the application itself while the error is happening. So here is another example where strace can also be pretty useful, and this one is more performance related. There is an interesting mechanism called vDSO, the virtual dynamic shared object. It's basically a feature that allows certain system calls — most of the time time-related ones, gettimeofday or clock_gettime — to be performed without switching to kernel space, which is nice because then we don't pay the overhead of the switch, and of course it gives us some performance boost. But the problem is, not everyone is aware that not all hypervisors support this — most notoriously the Xen hypervisor. And if you're working with database infrastructure on AWS, you know that instances have different generations: the m5 generation is KVM, where this feature is supported, but m4, for example, which is still in use, does not support it, because it uses Xen. So let's imagine you have this situation: you have Postgres running on different instance types and you want to figure out how much of a performance hit you get. For example, you have two different nodes and you see that one database performs a little bit slower than the other, and
the only difference is, for example, the instance type. So again, it's super straightforward: we attach our strace, and whenever we see real system calls being made, that means we are really switching to kernel space and we are taking this performance hit. Okay. Normally — I have to admit I'm probably giving this presentation too often — at this point I talk about scheduling and CPU migration, but that example is pretty artificial, especially when we have something more interesting and real to talk about: the vulnerability called MDS, Microarchitectural Data Sampling, if I remember correctly. It happened this year, just a few months ago. It's one of these CPU-related hardware vulnerabilities, similar to Spectre from last year. And yeah, of course, there is a mitigation for pretty much every distribution, and obviously this mitigation involves some performance hit. So now let's imagine the situation: you have your PostgreSQL running on top of some hypervisor, and then this hypervisor was patched, and you want to figure out how much performance degradation you will get. Normally it's pretty hard to answer that from Postgres itself, and that's why we deploy something more powerful: perf. And then, when we start to do simple CPU sampling, we see that suddenly one function from the kernel, do_syscall_64, takes too much time in comparison with previous snapshots. If we zoom in, we find something really interesting: we land on the instruction verw, and everything falls into place after we read what exactly this mitigation in the Linux kernel was about. This instruction was essentially repurposed: before, it was doing almost nothing, but now it also flushes all the CPU buffers, and that's what we see here. We see that almost 30% we're
spending on this function. That's basically where all of our performance hit is coming from. But then there is an interesting thing: sometimes you can notice this in a different way. For example, we were running Ubuntu on top of a patched hypervisor, but Ubuntu itself was not patched, and we saw the very same situation via a relatively high share of sampled time spent in native_safe_halt, which usually just means that your system is idle — but that wasn't the case. It was super strange, and we spent a few days trying to figure out what was wrong. And then it turned out that yes, indeed, this mitigation was also inserted into this safe-halt function, and here we are: that's basically how we indirectly figured out, exactly on the day this vulnerability was disclosed, that something was happening. Another really nice example of why perf can be really useful outside of Postgres — and in fact not only for Postgres but for many other applications — is something called the lock holder preemption problem. This problem is so severe that even CPU vendors provide one or another solution for it; for Intel, for example, it's called Pause Loop Exiting. Let me show a nice diagram to explain how it works. Let's imagine we have a hypervisor, and this hypervisor has four virtual CPUs, two of them active and two of them preempted, and now let's imagine that we're running Postgres on top of it. And now let's imagine that something happens: on vCPU1 some backend is doing some work, some SELECT or an UPDATE or whatever, and it happens that vCPU2 is waiting for vCPU1, on a lock or something. Normally, of course, it's not a problem, because locks — especially spinlocks — should be taken for a really short amount of time; but that assumes we're talking about real hardware, bare metal, here.
What could happen is that the hypervisor says: okay, vCPU1 got enough time, let's preempt it and give some time to vCPU4, which is running a different Postgres backend doing something else. And now we have this interesting situation: what was supposed to be a really short wait for vCPU2 is now an unbounded amount of time — whatever the hypervisor decides. And what's more — what does Pause Loop Exiting actually do in this situation? It tries to prevent such idle spin loops, but it does so by sending an exit from the guest to the hypervisor, which is also a kind of switch from user space to kernel space, which is also a performance hit. Now we have this in mind and we want to figure out how much it affects our performance. Here are two examples that I performed — in fact, on my own machine, but nevertheless you can see that the results differ. What I'm doing here is the very same setup, the very same database, wiped out from scratch, with the only difference that the database was running inside a KVM virtual machine, and in the second case PLE was disabled completely. Then we ran just a pgbench workload, a normal read-write workload, against the database, and we can suddenly see that with PLE enabled we got higher average latency than without it. Which means that in this particular situation our CPU was so saturated that Pause Loop Exiting was interrupting real waiting in our case, and was actually bad for our performance. So it was a negative impact — but of course it will not always be like that; I'm just showing that this feature can be bad. There are pros and cons to this feature.
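If you want to repeat this comparison on your own KVM host, here is a sketch of how to check and disable PLE; the parameter names are those of the kvm_intel module, and the block falls back to a note on machines that are not KVM hosts:

```shell
# Check whether Pause Loop Exiting is active on a KVM host with an Intel
# CPU; a ple_gap of 0 means PLE is disabled.
cat /sys/module/kvm_intel/parameters/ple_gap 2>/dev/null \
  || echo "kvm_intel module not loaded on this machine"

# To rerun the pgbench experiment with PLE off (root, reloads the module):
#   modprobe -r kvm_intel && modprobe kvm_intel ple_gap=0
```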
So you have to measure it for yourself. Yeah, here we have to make a little bit of a detour, and for the next sections I have to explain some basics about how Postgres works — and not only Postgres, in fact pretty much any storage-based database. Normally we have some processes running: backends that are doing the actual work, and background processes like the background writer and the checkpointer. Then we have some memory, organized in pages, in the middle; we have the write-ahead log (WAL) on the right side; and then we have the operating system cache and the storage. The point is that in Postgres basically all the writes, all the I/O, are buffered, which means we rely a lot on the Linux kernel itself. So what happens when we work with this database? Let's imagine we decided to update something, and our cache was warm; we update a few pages. Of course, first of all we have to write to the write-ahead log — that's how all this stuff works. We record this information, but we do not sync the data pages immediately. Then the background writer kicks in: with some particular configuration, it tries from time to time to synchronize those dirty buffers with the operating system cache — not with the storage itself, just with the operating system cache. Then, from time to time, another component kicks in, one which is basically external, not exactly a part of Postgres and not exactly under our control: kernel writeback. From time to time it tries to synchronize the operating system cache with the real storage, depending on some configuration — vm.dirty_background_ratio or vm.dirty_ratio and so on and so forth. And eventually, when we're doing a checkpoint, we synchronize everything to the storage, to preserve the data, of course. So what does it mean?
It means that even in this scheme we rely significantly on several parts: on kernel memory management, on writeback, and in general a lot on the kernel. And one more — I think the last, but really nice — example for perf, of how to use perf for PostgreSQL, is checking how much performance you can get from huge pages. Somehow it happens — I'm not sure if it's true in this audience — that people working with databases are not always aware of what huge pages are for and how they work; somehow they're surrounded by an aura of mystery. And here we're talking only about classic huge pages, not about transparent huge pages. To figure this out we can use the very same scheme as before: first of all, read the documentation. The Linux documentation says that huge pages are good because TLB (translation lookaside buffer) misses become faster to handle, and they also happen a little less frequently. So we have this information; it's a kind of theory that we have to prove or disprove for our purposes. So here, again, we just do an experiment — it's basically all about experimenting with your own setup, which is what's important; no one can tell you anything in advance. Here is again a simple example: a simple database on bare metal, with the only difference that the first one is using huge pages and the second is not. And now we're smart: we're recording dTLB load and store misses with perf. And then, yeah, we see that in the first case, when we have huge pages, we have almost 20% fewer dTLB load misses and almost 30% fewer store misses, which is quite nice — and it's nice not only because we kind of checked something.
It's nice because we checked exactly one component. Normally when people do some benchmark, they try to benchmark the whole pipeline, the whole set of actions from the client to the storage, and of course there can be other influences in there. We checked one particular component, we know the effect is there, and from this point we can derive some latencies. Okay, so what we were doing before can be described as stateless measurement: we were just watching some events, something happened, we got some information, and we forgot about it afterwards. But then BPF — well, extended BPF — was introduced. Originally we had BPF, the Berkeley Packet Filter; it has been with us since the 90s or so, and originally it was just bytecode that we could execute within the scope of the kernel, normally for TCP processing or something like that — also pretty stateless. But then extended BPF, thanks to Alexei Starovoitov and others, was introduced, and now we have totally amazing powers: now we have stateful measurement, so now we can respond to events in the kernel or in the application itself. We can attach to any function within the kernel or within an application — which is important, it just opens up a lot and a lot of possibilities. We can use registers, stacks, maps and everything. And so that this isn't just a sales pitch, I prepared a demo. Let's see if something goes wrong this time. So what happens here — I hope you can see it, I hope it's big enough — we have several panes here, several windows. In the first we have just PostgreSQL running, and psql attached to this PostgreSQL, nothing in particular. Now, here in this window we have all the BCC tools available. So first of all, I have to tell you that, of course, there is a catch in working with extended BPF directly.
It's pretty complicated: you would have to write fairly involved code, which gets JIT-compiled, by yourself. That's why there are different tools; the most important and most famous of them is BCC, the BPF Compiler Collection, which allows us to write just a few lines of Python code to generate such a BPF program. Here in this window we have exactly such a program, and what can we do? The simplest thing we can do is, for example, trace Postgres' exec_simple_query. So what we're doing here right now: we're tracing queries, and when I execute something like SELECT 1, we see this query happen. So we have some feedback; we do it a second time, we see it a second time. This could be super useful, for example, to measure latencies between queries. Of course, Postgres provides this information for you, but there are different pros and cons — for example, that information is provided as part of the log output, with some particular thresholds, and so on and so forth. So sometimes it can be really nice to get it exactly in this format; plus — well, I'll explain it later — the point is that we can also process this information within the kernel itself, which means we are much more performant, and we can, for example, filter this information based on a lot of different conditions. But what happens in the background when we run this? Yeah, it's just one of those Python scripts — I can show you, they're basically in /usr/share/bcc/tools, here they are — just a Python script that does all this stuff. And that's what I'm going to explain right now: when we run this trace, what happens? So BPF — oh, I'm sorry, BCC.
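For reference, the invocation in this pane looks roughly like this — a sketch in which both the tool location and the postgres binary path are assumptions about my particular machine:

```shell
# BCC's generic trace tool: attach a uprobe to exec_simple_query and print
# its first argument, the query string. Requires root and the bcc package;
# both paths below are assumptions about this particular machine.
TRACE=/usr/share/bcc/tools/trace
PGBIN=/usr/lib/postgresql/11/bin/postgres
if [ -x "$TRACE" ] && [ -x "$PGBIN" ] && [ "$(id -u)" -eq 0 ]; then
    timeout 10 "$TRACE" "u:$PGBIN:exec_simple_query \"%s\", arg1"
else
    echo "bcc tools or postgres binary not found; adjust TRACE and PGBIN"
fi
```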
It's just a Python script. It generates some C code, which is then compiled on the fly by the LLVM backend into BPF bytecode. Then BCC itself, via the perf API, creates a performance event — a uprobe — then attaches the BPF program to this uprobe, and also creates a map to store some information. So here we can see, for example, this BPF program that was created, and here we can see the map that was created by this program. But as we figured out, the problem — well, not exactly a problem, but this involves a lot of stuff: it involves generation on the fly, it involves a Python interpreter, and we've already seen situations where this performance overhead was too much for us — for example, a pod in Kubernetes so overloaded that we could not even run the Python interpreter. But the point is that it's not even necessary. Eventually, we basically just need to get the BPF program itself. So what can we do for that? We can reproduce by hand the very same list of actions that BCC performs. First of all, we create a uprobe: let's say perf probe -x on the postgres binary, for exec_simple_query. Yep, exec_simple_query. So now if we check tracefs, we have different information here, and we can check that those events were created. Then — I already prepared it, I think, here, yep, something called pg_latency — here we have a BPF program already compiled from the C source code by the LLVM backend, with only one small catch that I'll explain later. So we already have it precompiled, and what we can do is literally just run it: pg_latency. Now it starts, and we can check again: there is a BPF program.
There is a BPF map. So everything is pretty much the same — except for some small details like names and so on — and this program is now waiting for events. If we go, for example, and run a query, we will get this: the program is basically showing this one single measurement in a loop. But what's really nice about this stuff, what's really curious, is the following. Before, as I showed you, BCC created a BPF map, and normally that map is not visible from the outside. But with BPF we can pin this map — then you can see it, for example, under /sys/fs/bpf; here is "latency", exactly the map we created — and it's really convenient, because then we can read this map separately. Which means that, for example, we can keep values — say, the most recently used values — in memory and update them quite frequently, but not consume them, and just discard them when we don't need them. Which means we are not going to overload our servers with this information. And then we can, for example, just execute a separate reader, pg_latency_reader, and just get the values, and that's pretty much it, which is nice. It means that via this pinning of maps we can separate these concerns, and that's quite useful for monitoring purposes. But there is one catch.
I think I have it here. The problem — well, not exactly a problem, but an interesting situation — is that to be able to load this BPF program you have to execute the bpf() system call, which requires one attribute: the exact kernel version you're going to run on. And of course, if you have different setups, different infrastructure with a variety of different Linux kernels, compiling on one machine can be pretty problematic, because you have to specify literally one version there. Originally I was trying to use an ELF parser to change this version, but there was a problem: the ELF parser was generating a slightly different binary, with slightly different hashing. So eventually I just ended up replacing this version field manually with xxd, which works nicely: you just replace this section in your program and it works everywhere. Yeah, it could be silly, but if it works, it's not silly. So, that was a small trick. The other thing I want to mention is to be careful when you're running this on Ubuntu, because starting from some point they report the wrong Linux kernel version: with uname, the last patch level is always zero. I could not understand why my program wasn't running, why it was complaining, and it turned out that you have to read the correct version from /proc/version_signature. Just for you to know. Okay.
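Since both of these things trip people up — the version attribute and Ubuntu's uname — here is a sketch of computing the version code the way the loader expects it, preferring /proc/version_signature when it exists:

```shell
# LINUX_VERSION_CODE, which the bpf() loader compares against, is
# (major << 16) + (minor << 8) + patch. On Ubuntu, uname -r reports a
# zero patch level, so prefer /proc/version_signature when present.
ver=$(awk '{print $NF}' /proc/version_signature 2>/dev/null)
[ -n "$ver" ] || ver=$(uname -r | cut -d- -f1)
maj=$(echo "$ver" | cut -d. -f1)
min=$(echo "$ver" | cut -d. -f2)
pat=$(echo "$ver" | cut -d. -f3)
[ -n "$pat" ] || pat=0
echo "version $ver -> code $(( (maj << 16) + (min << 8) + pat ))"
```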
That was the short demo, the short example, and now let's return to our presentation. So yeah, as I just said, we have BCC, which is really nice and really popular, but kind of generic. So for our purposes — for my own purposes — I've created something more PostgreSQL-specific; you can check it out if you're interested. Here are a few examples. For example, we can check last-level cache misses and loads per backend and per query, which is really mind-blowing, because normally we only get this kind of data globally, and of course that's not that convenient — for example, when you have to share your database host with some other clients with different data access patterns. This can be really nice when you're trying to figure out last-level cache access patterns, for example to make use of cache-sharing technologies like Intel Resource Director Technology. Yep. I mentioned before that memory management is super important, and another component that's super important is writeback. So now let's forget about BPF for a second. Let's imagine we're smart: we know there is a performance event in the Linux kernel, writeback_written, and we know it's important to monitor it, so we start monitoring it with perf. And then we see: aha, from time to time kernel writeback kicks in and tries to synchronize everything with the storage, which is not exactly nice for a database, because it basically saturates our I/O — and for writing the write-ahead log, for example, that's already a problem for a database. For modern devices like NVMe SSDs, you have several queues.
It's not exactly that much of a problem there, because we still have some capacity to write, but things can be worse. The point is that sometimes the Linux kernel will inject delays, timeouts, into your process when writeback is not keeping up with the amount of dirty pages your application is creating. And if you think about it, that's really something: the kernel injects delays into your business-critical application, when you want to return a result within milliseconds or even less. And to monitor this information — unfortunately there is no perf event for it, at least not yet — I've created another small script for these writeback timeouts. Here is just an example: we have just pgbench with a standard workload and a relatively large amount of memory, and in this particular case we caught just four situations where the Linux kernel injected these delays. But you can imagine that on a huge server with 120 gigabytes of memory or more, it could be much worse. So, probably the last section: it's about Kubernetes.
So, it's still questionable whether it's good or not to run a database on Kubernetes, but at least part of the answer is that sometimes it's convenient — that's why we're doing it. If you're familiar with Kubernetes, you know you can manage resources via the manifest, where you specify resource requests and resource limits. As I said before, for us memory management is super important, so let's imagine we run our database on Kubernetes, inside pods, and let's talk about memory now. My first reaction when I was going through this stuff was: aha, probably the memory request corresponds to the cgroup soft limit, memory.soft_limit_in_bytes, and the limit corresponds to the hard limit, memory.limit_in_bytes. That was my first reaction, but of course with Kubernetes you should be prepared that obvious assumptions are usually not that obvious. First: the memory request has nothing to do with the soft limit at all. In Kubernetes, the memory request is basically used only for internal scheduling purposes — to figure out, for example, QoS classes and to calculate the oom_score_adj, something like that. So, okay, we figured this out, and from this I had another theory.
I thought: okay, then it's cool. It means that we don't have this soft limit, which means we don't have, for example, memory reclaim, because we're never going over a soft memory limit, so the kernel is not trying to reclaim our memory. And then, of course, I was wrong, because of the thing called memory pressure, especially for containers. Containers are designed to be allocated in such a way that they already have an amount of memory relatively close to what your application needs, which means that we are already close to the hard memory limit just by default, which means that by default the memory pressure will be quite high, and due to this memory pressure we get memory reclaim quite frequently nevertheless. And we have already seen situations on super overloaded pods with a database where we were doing those memory reclaims quite often, especially when we were, for example, on the edge of running out of memory.

Yeah, and here's a nice thing: all this stuff, well, at least most of those scripts, you can actually use with a particular Docker container ID nowadays. I think starting from kernel 4.18 you can also use the cgroup ID, but it does not exactly correlate with, for example, the Docker container ID, so it's nice to still have this possibility.

Yeah. The next section is about... basically, I said everything about how super powerful eBPF is; the problem is that in every different infrastructure it's still kind of painful to run. For example, if you want to try it on your local machine, just a laptop, it's pretty straightforward: you just have to check that debugfs is mounted, and you have to check that the required kernel parameters are there, but normally they are there in the more modern distributions.
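As a side note on observing that pressure: on kernels newer than the ones discussed here (4.20 and later), the PSI ("pressure stall information") interface exposes exactly this, globally via `/proc/pressure/memory` or per cgroup v2 via `memory.pressure`. A small sketch that parses the file format:

```python
# Parse the PSI file format, e.g. the contents of /proc/pressure/memory
# or a cgroup v2 memory.pressure file.

def parse_psi(text):
    """Return {"some": {...}, "full": {...}} with avg* floats and total int."""
    result = {}
    for line in text.splitlines():
        kind, *fields = line.split()
        result[kind] = {
            k: float(v) if k.startswith("avg") else int(v)
            for k, v in (f.split("=") for f in fields)
        }
    return result

# Sample PSI output for illustration:
psi_sample = """\
some avg10=2.04 avg60=0.75 avg300=0.40 total=157622
full avg10=0.00 avg60=0.00 avg300=0.00 total=34509
"""

psi = parse_psi(psi_sample)
# avg10 is the percentage of the last 10 seconds during which at least
# one task stalled waiting on memory (e.g. during reclaim).
print(psi["some"]["avg10"])  # 2.04
```

A sustained non-zero `some avg10` on a database pod is essentially the "memory pressure despite no soft limit" situation described above, made directly visible.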
There is no magic here. A little bit of magic happens when you want to run this in Docker. First of all, of course, there's going to be this machinery with debug symbols: you have to not forget to copy them from the container, or to use them from there, because it's kind of separated. But then, of course, you have to run the Docker container with extra elevated privileges, because all this BPF and tracing stuff requires them. As a nice perk, though, you can create a separate monitoring container and attach to your application without, you know, polluting the original container. Another kind of trick here is that Docker uses overlayfs, of course, and overlayfs, well, overlay file systems in general, all have one problem: until 4.17, I guess, they were not supporting uprobes, unfortunately, so you have to be aware of that too. And then probably most of the time I spent was trying to run all this BPF stuff on Kubernetes. Of course we require some elevated privileges there too, from the service account, but the thing I spent probably most of the time on was exactly figuring out how to deal with different kernel versions. Fortunately, for BPF and BCC there is a variable called BCC_LINUX_VERSION_CODE, which you can override at runtime, but only when you know that the particular version is close enough to what you're supposed to run; otherwise there could be some side effects.
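On the BCC_LINUX_VERSION_CODE point: the value BCC expects is the kernel version packed the same way the kernel's KERNEL_VERSION(a, b, c) macro packs it, so overriding it just means handing BCC that integer for a close-enough kernel. A hedged sketch of the packing (the real macro additionally clamps the patch level on recent kernels):

```python
# Pack a kernel release string the way KERNEL_VERSION(a, b, c) does:
# (a << 16) + (b << 8) + c. This is the kind of value you would put
# into BCC_LINUX_VERSION_CODE when overriding version detection.

def kernel_version_code(release):
    """Pack 'a.b.c' (distro suffixes like '-91-generic' are ignored)."""
    parts = release.split("-")[0].split(".")
    major, minor, patch = (int(p) for p in (parts + ["0", "0"])[:3])
    return (major << 16) + (minor << 8) + patch

# For example, the overlayfs uprobe support mentioned above landed in 4.17:
print(kernel_version_code("4.17.0") > kernel_version_code("4.16.18"))  # True
print(hex(kernel_version_code("4.17.0")))  # 0x41100
```

This is also why the override is only safe for "close enough" kernels: the packed number tells the BPF programs which struct layouts and features to assume, and lying by more than a patch level or two can produce the side effects mentioned above.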
And of course, that's not nice. Yep, and the last section, really quickly, is about how to break things, because all this stuff is really powerful, and with all that power comes, of course, great responsibility. In this particular example, it was in fact a quite outdated perf, so it doesn't really matter that much, but there was a really interesting situation: this outdated perf somehow could not handle the case where I was trying to extract some arguments, in this case some information from a trigger, when this information was NULL. And I was just trying to do something, and I really crashed a production backend, which is of course not that nice. Another example: well, unfortunately, the software we are using is not without bugs. This was also with some outdated perf version, and I was trying to figure out how many full-page writes we are doing from PostgreSQL. When I tried to create this probe and executed it, perf got stuck trying to create this uprobe, in an uninterruptible sleep in kernel mode, which means that nothing could stop it, nothing could kill it. And not only that: it basically means that all of Docker got stuck, all the machinery got stuck, pretty much everything got stuck, and the only thing we could do was restart the whole node. Well, in this case it was fortunately a replica, so not a big deal, but still. And the last part, it's already an outdated slide, but nevertheless it shows how powerful and how scary this stuff could be. It's from 2018, and it was Linux kernel version 4.4, quite ancient already at that point, but nevertheless, it's telling: we are not used to the situation where, with Python, we could get a kernel panic. Yeah, right? So it's kind of scary.
Of course, people are working on these bugs, but still, if you want to use this in production, you have to be aware, you have to be careful, and you have to check stuff multiple times. So yeah, that's pretty much it. I hope you have a lot of questions. Yep, any questions? Come on, people. I don't believe there are no questions you would like to ask. But please be aware that there are no stupid questions, so no danger here.

You showed, mostly at the very beginning, a lot of benchmarks and stuff like this regarding different changes between kernel versions, different patches for CPU problems. Do you have something that continuously runs in your system and benchmarks this stuff as kernel updates are released, as new bug fixes are released, to get kind of an overview of what is good for you and what is not? So maybe you want to wait with deploying some kernel bug fix, this kind of stuff. Or do you just do it when you get a call from someone that the database is now not working as it used to, and then retrospectively try to benchmark what may have changed?

Well, we're not doing this continuously, unfortunately; we're doing this ad hoc. So sometimes we just go at it on our own, because we want to know something: for example, we know that there's a new version of something, or, for example, our colleagues produced a new distribution. And sometimes we're doing this, of course, when people are complaining that something is wrong. But unfortunately, continuous performance benchmarking is a totally different level of complexity. So yeah, we're kind of working on this right now, because after spending about a few months on it, I have, not just tried but successfully, prepared a Kubernetes setup for benchmarking.
It's kind of continuous: it writes all the results, with all the plots, into history, and so on and so forth. But it's at the very beginning right now, unfortunately. Yep, any other questions? Okay, then: thank you.