Okay, with this, we're ready to move on to our third scenario, scenario 1b. Here, remember, I still have on-screen the results from my previous simulation, which used a smaller daily conversion count per bucket, and I ended up with pretty high noise values. So what if, for my use case, anything over 20% noise is not acceptable, and a noise value of 40% just won't do? As a user of the aggregation service and of the Attribution Reporting API, or the other APIs, is there a way I can mitigate that? The answer is yes. To do that, we can tweak one of the many parameters at our disposal.

If you remember, in our previous simulations we were using the default batching frequency, which is daily. What this means is that, as a user of the API, I've decided to send my data to the aggregation service once per day, because I want fast insights. But I can adopt a different strategy: I can trade off quick insights against more accurate data, namely data with better signal-to-noise ratios. For example, I could decide to batch my data weekly instead. Naively calculated, in a week I can expect roughly seven times more conversions than in a day. This means my buckets, my blue bars, will be bigger, so I'm going to get better signal-to-noise ratios overall (see the sketch at the end of this section).

Let's try that. Remember, these are the noise ratios from my previous simulation, so hopefully, if I simulate again, I'm going to get better noise ratios. And there we go: we went down from above 20% to about 4% to 5%. Considering my use case, where we assumed that more than 20% noise is not acceptable, this already looks much more acceptable. And we land on similar results for purchase count. So here we have one example of how an organization using the API and the aggregation service can reduce the impact of noise.

Now, if one decided that getting hourly insights was important, they would change their batching strategy here to hourly. But considering all of our dummy measurement data and all of our parameters, we can expect that this will give us a pretty poor signal-to-noise ratio. Let's go ahead and run that. Yes, there we go: this is really, really high. This makes sense given the interactions we've just covered between batching frequency, average conversion count per bucket, and the other parameters at my disposal. In that specific case, there are other things I can do as an API user to mitigate the noise. We'll come to that in later sections.
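To make the seven-times intuition concrete, here is a minimal sketch in Python of the back-of-the-envelope calculation; it is not how Noise Lab or the aggregation service actually compute things. The 2^16 contribution budget and the Laplace noise scale of budget divided by epsilon follow the public Aggregation Service documentation, while the epsilon, daily conversion count, and per-conversion value below are hypothetical numbers chosen purely for illustration.

```python
import math

# Rough back-of-the-envelope model of the batching trade-off. NOT the Noise
# Lab implementation; it only illustrates the scaling argument made above.
CONTRIBUTION_BUDGET = 2**16   # L1 contribution budget per source event
EPSILON = 10                  # hypothetical privacy parameter
CONVERSIONS_PER_DAY = 25      # hypothetical conversions per bucket per day
VALUE_PER_CONVERSION = 1024   # hypothetical scaled value per conversion

# The Laplace noise added to each bucket has scale b = budget / epsilon,
# independent of how much real signal the bucket holds.
laplace_scale = CONTRIBUTION_BUDGET / EPSILON
noise_stddev = laplace_scale * math.sqrt(2)  # stddev of Laplace(b) is b*sqrt(2)

for label, days in [("hourly", 1 / 24), ("daily", 1), ("weekly", 7)]:
    true_value = CONVERSIONS_PER_DAY * days * VALUE_PER_CONVERSION
    # One simple definition of "noise ratio": noise stddev over true value.
    noise_ratio = noise_stddev / true_value
    print(f"{label:>6}: bucket value = {true_value:>12,.0f}  "
          f"noise ratio ~ {noise_ratio:7.1%}")
```

Because the noise scale is fixed by the budget and epsilon while the true bucket value grows linearly with the batching window, the noise ratio falls roughly as one over the window length: weekly batching comes out about seven times better than daily, and hourly about twenty-four times worse, which matches the behavior we just saw in the simulations.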