Hi, everyone. Thanks for joining me. My name is Rick Viscomi. I'm an engineer and developer advocate on web transparency projects at Google, including the Chrome User Experience Report, or CrUX for short. As you may know, CrUX is a powerful data set containing insights about how real users experience the web. The data set goes all the way back to late 2017 and includes data from over 18 million websites.

This will be a somewhat advanced presentation, so if you want to brush up on the basics, you can visit the CrUX docs at bit.ly/chrome-ux-report to learn about things like metrics, dimensions, best practices, and more. What I'll be sharing with you today are a few pro tips for mining the low-level data set on BigQuery for insights about how users are experiencing the web.

By now, I'm sure you've heard of Core Web Vitals. They are the most important UX metrics we think you should be looking at in 2020. The list includes Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). CrUX supports all three of these metrics and has months of data across millions of websites. So let's head over to BigQuery to see what we can find.

Here, I'm querying the metrics summary table, which is a really quick and easy way to get high-level stats about a website's Core Web Vitals. You can see that we're extracting the percent of user experiences that meet the "good" thresholds for LCP, FID, and CLS, as well as each metric's 75th percentile. All of these stats are precomputed for you, so you can spend more time finding insights and less time writing queries. This summary table is also much smaller and more efficient: it processes only about 100 megabytes, so you shouldn't have any concerns about exceeding your 1 terabyte of free monthly quota. The raw data still exists if you need access to specific histogram bins, but almost everything you need is here in the materialized data set.
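For reference, the on-screen query might look something like the sketch below. It runs against the public `chrome-ux-report.materialized.metrics_summary` table; the exact column names (`fast_lcp`, `fast_fid`, `small_cls`, `p75_*`) and the example date are assumptions based on that table's schema, so check the schema in the BigQuery UI before relying on them.

```sql
-- Sketch: good rates and 75th percentiles for one origin,
-- from the precomputed metrics summary table (column names assumed).
SELECT
  fast_lcp,   -- fraction of experiences meeting the "good" LCP threshold
  fast_fid,   -- fraction meeting the "good" FID threshold
  small_cls,  -- fraction meeting the "good" CLS threshold
  p75_lcp,
  p75_fid,
  p75_cls
FROM `chrome-ux-report.materialized.metrics_summary`
WHERE origin = 'https://web.dev'
  AND date = '2020-08-01'  -- example release date, not from the talk
```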
If you've ever queried the raw data, you'll know that there are several useful dimensions you can drill down on, like month, device type, and country. So let's look at a few examples of doing that efficiently with the summary tables.

The first thing we'll do is modify this query to see how the Core Web Vitals have changed in recent months. To do that, we change our WHERE clause to include all releases in 2020 by setting the condition to date greater than or equal to 2020-01-01, or January 2020. Next, we include the year and month of the release in the SELECT clause so we can see it in the output. The difference between year and month and date is that the tables are partitioned by date, while the year and month correspond to the table names in the raw data set. And finally, we can sort the results chronologically and run the query. You can see from the results that web.dev has had relatively stable and good user experiences this year.

But what if we want to break this down by desktop and phone experiences? For that, all we need to do is change over to the device summary table. We'll restrict the results to only desktop and phone; tablet is available, but it's less interesting. Next, we'll add the device name to the SELECT clause and secondary-sort by it to keep the ordering of the results consistent. I'm going to run this query, but there's one thing I want to show you in the results: for boring technical reasons, these percentages are out of all user experiences on the origin, not just the percent of desktop experiences or the percent of phone experiences. So one last thing we need to do is normalize these distributions so it doesn't matter that desktop is more popular than phone. To do that, we just divide each metric by its total. Now we have comparable results between devices, and we can see that desktop actually trends slightly better than phone.

And finally, what if we want to break this down even further by users' countries?
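The monthly-trend step described above might be sketched like this; the `yyyymm` column (matching the raw table names) and the `fast_*`/`small_*` columns are assumptions about the summary table's schema.

```sql
-- Sketch: Core Web Vitals good rates by month for all 2020 releases.
SELECT
  yyyymm,     -- e.g. 202001, corresponding to raw table names
  fast_lcp,
  fast_fid,
  small_cls
FROM `chrome-ux-report.materialized.metrics_summary`
WHERE origin = 'https://web.dev'
  AND date >= '2020-01-01'  -- tables are partitioned by date
ORDER BY yyyymm
```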
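And the device breakdown with the normalization step described above might look something like the sketch below: because each row's percentages are out of all experiences on the origin, dividing the "good" bucket by that row's good + needs-improvement + poor total makes desktop and phone comparable. The column names (`fast_lcp`, `avg_lcp`, `slow_lcp`, `device`) are assumptions about the device summary table's schema.

```sql
-- Sketch: per-device good-LCP rate, normalized within each device
-- so desktop's larger share of traffic doesn't skew the comparison.
SELECT
  yyyymm,
  device,
  fast_lcp / (fast_lcp + avg_lcp + slow_lcp) AS pct_good_lcp
FROM `chrome-ux-report.materialized.device_summary`
WHERE origin = 'https://web.dev'
  AND date >= '2020-01-01'
  AND device IN ('desktop', 'phone')  -- tablet omitted, as in the demo
ORDER BY yyyymm, device
```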
For that, we can change over to the country summary table. For demonstration purposes, let's restrict the results to two countries with very different experiences, Korea and Nigeria, and focus only on desktop. Now, we could just write the country code to the results, but I want to show you one other cool trick: the CrUX data set includes an experimental function that maps country codes to full names. The last thing we'll do before running the query is to sort by country rather than device. The results tell a really interesting story about the disparity in user experience by country, and BigQuery was able to analyze this in only a couple of seconds, using only about a gigabyte of data.

So that's it. These are just a few quick examples of the power of the BigQuery data set, and it doesn't have to be mysterious or expensive. I hope you start exploring the data set and finding insights about the state of the web. You can find links to all the resources and queries we discussed in the description and comments of this YouTube video.

If you have any questions at all, we have a whole support network set up for you. You can find me on Twitter at @rviscomi, and I also tweet from @ChromeUXReport. We have announcement and discussion groups for important product updates and support. We have the CrUX Cookbook on GitHub, where you can find example queries for common problems. And finally, we have CrUX office hours, where we can meet virtually and get your questions answered.

I hope you found this useful. Please hit the thumbs up if you did. Thanks for watching, everyone.
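For anyone following along from the description, the country comparison from the demo might be sketched like this. `chrome-ux-report.experimental.GET_COUNTRY` is the experimental code-to-name function mentioned in the talk; the other column names and the lowercase country codes are assumptions about the country summary table's schema.

```sql
-- Sketch: desktop LCP experience in Korea vs. Nigeria, with the
-- experimental function mapping country codes to full names.
SELECT
  `chrome-ux-report.experimental.GET_COUNTRY`(country_code) AS country,
  fast_lcp / (fast_lcp + avg_lcp + slow_lcp) AS pct_good_lcp,
  p75_lcp
FROM `chrome-ux-report.materialized.country_summary`
WHERE origin = 'https://web.dev'
  AND date = '2020-08-01'  -- example release date, not from the talk
  AND device = 'desktop'
  AND country_code IN ('kr', 'ng')
ORDER BY country  -- sort by country rather than device, as in the demo
```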