Now that we've seen the creation of aggregate reports and how they turn into a summary report, we will dive a little deeper into noise and the various parameters that impact it.

We can start by asking: what is noise? Noise is a random value drawn from a predetermined distribution and added to the true value. In this image, the orange portions of the bars are the random noise values that were drawn and added to the true values, which are the blue portions of the bars. Noise is added to summary reports to help protect individual user privacy, and it's added in a way that makes it difficult to identify an individual user's contributions.

The amount of noise added to a summary report is based on a random value drawn from a fixed distribution. It's important to remember that this random value depends only on epsilon and the contribution budget; it does not depend on the aggregate values that are collected. We will see an example of this in the Noise Lab demo.

In this example, we have two buckets with very different true values. In the second image, once noise is added, we see that a similar amount of noise is added to both, but the noise has a much larger relative impact on the smaller blue bar than on the larger one. It's important to remember that ad techs can't control the orange values, which in this case are the random noise, but they can control the blue values, which are the true values they are tracking. The larger the blue bars, the higher the signal-to-noise ratio.

Now that we have an understanding of how noise is generated and how it impacts summary reports, we can look at the various parameters that change the relative impact of noise. All of these are parameters that an ad tech can make design decisions around to improve their signal-to-noise ratio.
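To make the "orange bar" concrete, here is a minimal sketch of noise that is drawn from a fixed distribution whose scale depends only on the contribution budget and epsilon, never on the aggregate value itself. This uses a Laplace distribution and illustrative parameter values; the exact distribution and epsilon used by the aggregation service are assumptions here, not a statement of the API's implementation.

```python
import math
import random

CONTRIBUTION_BUDGET = 65_536  # fixed L1 contribution budget set by the API
EPSILON = 10.0                # privacy parameter; illustrative choice only

def laplace_noise(scale: float, rng: random.Random) -> float:
    # Inverse-CDF sampling from a Laplace(0, scale) distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def add_noise(true_value: float, rng: random.Random) -> float:
    # The noise scale depends ONLY on the budget and epsilon,
    # not on the true value being protected.
    scale = CONTRIBUTION_BUDGET / EPSILON
    return true_value + laplace_noise(scale, rng)

rng = random.Random(42)
for name, value in [("small bucket", 500), ("large bucket", 50_000)]:
    noisy = add_noise(value, rng)
    relative = abs(noisy - value) / value
    print(f"{name}: true={value} noisy={noisy:.0f} relative impact={relative:.1%}")
```

Because the noise draw ignores the true value, the same magnitude of noise lands on both buckets, so its relative impact on the small bucket is much larger.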
We'll go through each of these parameters in a little more detail to see how they change the relative impact of noise on summary reports, and in the next section we will see how to experiment with these parameters and quickly observe the impact of changing them in Noise Lab.

The first two are API privacy parameters. The contribution budget is a fixed value set by the API to protect user privacy, and as you will see in Noise Lab, it can be used to scale the values that an ad tech is tracking. Next, we have epsilon, another privacy parameter. Epsilon ranges from zero to 64. During the origin trial, this is a parameter that ad techs should experiment with: we are looking for ecosystem feedback on this value, and we plan to set an upper limit on it that satisfies the ecosystem's use cases.

Next, we have conversion data, which in this case means conversions per bucket. The more conversions per bucket, the smaller the impact of noise. Thinking back to the blue and orange bars from the previous slide, if you have more conversions per bucket, your blue bar is larger, and therefore the orange bar, the noise, has a smaller relative impact.

Next, we have the size of values: the smaller the values, the larger the impact of noise. Thinking about the blue and orange bars again, if you are tracking purchase values of inexpensive items, the overall size of the blue bar will be smaller than if you are tracking the purchase value of a more expensive item.

Next, we have the number of dimensions: the more dimensions being tracked, the larger the impact of noise. If you're tracking many dimensions, you are more likely to have fewer conversions per bucket, so this ties into both conversions per bucket and key strategies.

Next, we're going to look at some aggregation strategies. The first one is batching frequency: the more frequently reports are batched, the larger the impact of noise.
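The relationship between conversions per bucket, value size, and noise can be sketched with a small helper. The numbers here are hypothetical, and the noise scale simply mirrors the budget-over-epsilon relationship discussed above; for a Laplace distribution, the expected magnitude of the noise equals its scale parameter.

```python
# Illustrative noise scale: contribution budget / epsilon (assumed values).
NOISE_SCALE = 65_536 / 10.0

def relative_noise_impact(conversions: int, value_per_conversion: int) -> float:
    # Expected |noise| for Laplace(0, b) is b, so the relative impact
    # shrinks as the true aggregate (the "blue bar") grows.
    true_total = conversions * value_per_conversion
    return NOISE_SCALE / true_total

# A sparse bucket is swamped by noise; a dense bucket barely notices it.
print(f"sparse bucket: {relative_noise_impact(10, 120):.1%}")
print(f"dense bucket:  {relative_noise_impact(10_000, 120):.1%}")
```

The same helper also shows the size-of-values effect: holding conversions fixed, raising `value_per_conversion` shrinks the relative impact just as more conversions would.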
So for example, if you batch per hour, you will most likely have fewer conversions in that batch than if you batched on a longer timeframe such as weekly or monthly.

Next, we have scaling: with scaling applied, the impact of noise is smaller. If you're tracking the purchase value of an item and you apply scaling, you in some sense increase the value that is stored for each purchase, and therefore the overall size of the blue bars.

Then finally, we have key strategies: coarse versus granular. The more granular a key structure, the larger the impact of noise. In our example, we had a key structure of geography, campaign ID, and product category. If we were to remove one of these dimensions, we would most likely end up with more conversions per bucket, or larger blue bars, and therefore noise would have a smaller impact.

Now that we have an understanding of the Attribution Reporting API, summary reports, noise, and the various parameters that an ad tech can experiment with, I will pass it over to Maud, a developer relations engineer on the Privacy Sandbox team, who will go through Noise Lab.
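Before moving on, the scaling strategy mentioned above can be sketched in code: scale tracked values up toward the contribution budget before reporting, then divide the noisy aggregate back down. The scale factor and the maximum expected purchase value here are assumptions for illustration, not values prescribed by the API.

```python
CONTRIBUTION_BUDGET = 65_536   # fixed L1 contribution budget set by the API
MAX_PURCHASE_VALUE = 2_000     # assumed largest value we expect to track

# Spread the expected value range across the full budget.
SCALE_FACTOR = CONTRIBUTION_BUDGET / MAX_PURCHASE_VALUE

def to_reported_value(purchase_value: float) -> int:
    # Larger reported values mean larger "blue bars" relative to the
    # fixed noise, improving the signal-to-noise ratio.
    return round(purchase_value * SCALE_FACTOR)

def from_noisy_aggregate(noisy_sum: float) -> float:
    # After aggregation and noise, descale to recover the original units.
    return noisy_sum / SCALE_FACTOR
```

Since the noise is added in the scaled units, descaling divides the noise by the same factor, which is why scaling shrinks its relative impact.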