So in this presentation, I'm going to talk about how you can use web-vitals.js, a JavaScript library that helps you target and debug the things that will improve your Core Web Vitals. Some of us may already know what Core Web Vitals are and some may not, so I'm going to assume we don't. Web vitals are a set of metrics measured on a page that encapsulate the user experience in different ways, because a user can experience a page in different ways: how fast the page loads, how fast it responds when people interact with it, whether there are delays, and whether things shift around once the page has loaded. These are all measured in different ways, and the Core Web Vitals are the subset of those metrics defined by Google, which Google uses as a signal for ranking.

We'll use some acronyms in this presentation. These are the different Core Web Vitals metrics: LCP, Largest Contentful Paint; FID, First Input Delay; INP, Interaction to Next Paint; and CLS, Cumulative Layout Shift. If you want to know what each of these metrics is, you can find further information at these links; I'm not going to explain the metrics themselves, you can do that in your own time. Assuming you know what they are, I'll move on with the presentation.

Here's what you can expect in this presentation: the different data sets for Core Web Vitals, with a bit of an explanation of each; the difference between field data and lab data; what you can achieve with this library; and, at the end, a demo of an actual implementation of the library on a live site.

The first aspect is setting up the library on a site. There's a YouTube video, which I'll show on a slide coming up, about one hour long. If you watch it, it walks you through exactly how I accomplished everything with setting up this library and getting it all to work, which you'll eventually see in the demo.

First, you need to understand how the data is acquired and what the different data sets mean. The Chrome User Experience Report, which I'm going to call CrUX for short, is a public data set provided by Google, captured from the Chrome browser when you enable the setting to send anonymous usage statistics. This data is publicly available, but it's only updated about once a month. You can in fact query it in BigQuery, and there's a link for that; follow the link and it will explain how.

Then there's PageSpeed Insights, which most of us probably know already. This is a simulated environment where you can run a page and get the metrics for it. Very importantly, it is a simulated environment: it tries to mimic a page loading on, say, a phone over a 4G connection, with certain browser requirements and so on. When you run the PageSpeed test, it gives you the metrics from the test and also compares them with the CrUX data, so you see the simulated metrics next to the field metrics. You can also set up PageSpeed Insights as a pipeline, which I've successfully done on some projects of mine. Every time you work on a new feature and create the MR, the pipeline runs and gives you the metrics, and then you can compare and contrast: did this MR push the metrics up, lower them, or what? For me that's really helpful to know, because when it pushes the metrics up, the pipeline is blocked at that point. Then it goes to what I call a Core Web Vitals engineer, which in this case is me, and I try to figure out what they're actually trying to accomplish, how it's affecting the Core Web Vitals, and whether we can get around it somehow. Doing that ensures your Core Web Vitals stay within a certain threshold and you don't go over it.
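By the way, if you want to script PageSpeed Insights yourself, it has a public v5 API you can call directly. Here's a minimal sketch of that idea, not the actual pipeline setup from the talk: the endpoint and response fields are from the public API docs, the runPsi helper name is just for illustration, and the API key and error handling are omitted.

```js
// Minimal sketch: query the public PageSpeed Insights v5 API for one URL.
// Works in the browser or Node 18+ (both provide a global fetch).
const PSI = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function runPsi(url, strategy = 'mobile') {
  const res = await fetch(`${PSI}?url=${encodeURIComponent(url)}&strategy=${strategy}`);
  const data = await res.json();
  return {
    // Lab data: the performance score from the simulated Lighthouse run (0 to 1).
    labPerformance: data.lighthouseResult.categories.performance.score,
    // Field data: CrUX metrics, present only when the URL has enough traffic.
    fieldLcpMs: data.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile,
  };
}

runPsi('https://example.com/').then(console.log);
```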
Google Search Console also gives you data for your specific site. If you sign up for Google Search Console, which is free by the way, you get some basic data on your Core Web Vitals, presented as a summary of groups of pages. For example, your site might have a blog content type, and those pages are usually rendered by one blog template, so it tries to group all the blog articles together and gives you the Core Web Vitals for that group. If you improve your blog template, you basically improve the entire group.

And then there's web-vitals.js, the library. If you install it on your site and set it up, you get real-time monitoring: when you deploy something, you can see instantaneously how the metrics fluctuate. It also gives you a more granular view of your site's metrics; for example, it can tell you, for a specific URL, which div or class or ID you need to target. You'll see that in the demo as well. With web-vitals.js, you send the data to Google Analytics; from there, you can connect Google Analytics to BigQuery and visualize the data. Even better, you can send it on to Looker Studio, and Google provides a Looker Studio template specifically for Core Web Vitals, so you just use that template and it gives you all the nice graphs and everything.
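To make the real-time monitoring point concrete: even before wiring anything up to Google Analytics, a few lines are enough to watch the metrics arrive in the browser console. A minimal sketch, assuming you load the library from a CDN; the unpkg URL and the version pin are just one documented way to load it, per the web-vitals README.

```js
// Paste into a <script type="module"> tag on any page of your site.
// CDN URL and version pin are assumptions; check the web-vitals README.
import {onCLS, onLCP, onINP} from 'https://unpkg.com/web-vitals@3?module';

// Each callback fires as the metric is measured, so you see changes
// right after a deploy instead of waiting a month for CrUX to update.
onCLS(console.log);
onLCP(console.log);
onINP(console.log);
```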
Now, there's field data and there's lab data. Field data is the historical report for a URL over a period of time, say a month. This is very important to understand, because it's data from your actual users on actual devices. When you test with, say, Lighthouse in your Chrome browser, you might get very good results because you're on a very fast computer with a very fast Wi-Fi connection, and you might think, oh, my site is great. But that's not what your users actually experience: they might be on a cell phone, they might be passing through a tunnel so the connection suddenly drops, comes back, and the page restarts loading. There are all sorts of variations, and field data is an aggregated measurement of all of that. Lab data is what you get from PageSpeed Insights, and it's simulated: it tries to simulate what the most common user would experience. If you fix things that you find in lab data, that will generally fix things for all your users eventually. But it's very important to keep in mind that you might still miss things your users are experiencing that you don't experience yourself. The main difference between field data and lab data is that lab data is gathered on a single device, on a single network, in a single location, while field data comes from all devices, all networks, and all locations, for all your users. The web.dev link at the bottom gives a very detailed explanation of the differences if you want to do further reading.

So here's what you can accomplish with web-vitals.js, which is available on GitHub. You can send your CWV data to a Google Analytics GA4 property. I don't think it works with the old Google Analytics, but it doesn't really matter, because the old Google Analytics doesn't work anymore; so it sends the data to a GA4 property. Then you can connect that to BigQuery, pull the data in, and send it on to Looker Studio. You can deploy a change to a production site and view the results in real time, so you don't need to wait a month to see whether a change you made improved things or not. And you can also find the exact cause of whatever is affecting your different CWV metrics.

INP, by the way, Interaction to Next Paint, is a new metric that is still in what I think they call beta; I might be wrong on that. It's a new metric, and it's not officially part of Google's ranking algorithm yet, but it will be in March of next year. So it's good to start working on it now, so that when it actually becomes a ranking factor, you're ahead of the game.

To set up web-vitals.js, you're going to need a GA4 property, a Google Cloud project with BigQuery, a Chromium browser, any text editor, and some public website. That's really all you need. All right, this is the video I was talking about that walks through how to set it up; it will be linked somewhere with the final recording for this channel. If you're really interested in setting this up, I'd highly advise you to take a look at it, because all the knowledge I have actually came from this video.

And here is the code. This is just a regular JavaScript file that you include on all your pages. I'm not sure it's readable for you, but the first few lines import the library; then there's the function that sends the data to Google Analytics, and you tell it what to send; the switch statement sends different parameters based on the different metrics; and the last lines bring in gtag, which is what actually sends the data to Google Analytics. Then you call the functions, onCLS and so on, to send all of the data that you collect. With a simple bunch of code like this, you can achieve everything I'm going to show you in this demo.
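The slide itself isn't captured in this transcript, but based on that description, and on the pattern documented for reporting web vitals to GA4, the file looks roughly like this. Treat it as a sketch rather than the exact file on the slide: the attribution field names are from web-vitals v3 (newer versions rename some of them), the getDebugTarget and sendToGoogleAnalytics names are illustrative, and gtag() is assumed to come from the standard GA4 snippet loaded on the page.

```js
// Sketch of the per-page script described above. The attribution build of
// web-vitals adds the debug info that the Looker Studio "debug target"
// view relies on.
import {onCLS, onFID, onINP, onLCP} from 'web-vitals/attribution';

// The switch sends a different parameter per metric: the element (or event
// target) most responsible for the measured value.
function getDebugTarget(metric) {
  switch (metric.name) {
    case 'CLS':
      return metric.attribution.largestShiftTarget;
    case 'FID':
    case 'INP':
      return metric.attribution.eventTarget;
    case 'LCP':
      return metric.attribution.element;
    default:
      return '(not set)';
  }
}

function sendToGoogleAnalytics(metric) {
  // gtag() is provided by the standard GA4 snippet on the page.
  gtag('event', metric.name, {
    value: metric.delta,          // deltas sum correctly across reports
    metric_id: metric.id,         // unique ID for this page load
    metric_value: metric.value,   // the current metric value
    debug_target: getDebugTarget(metric),
  });
}

onCLS(sendToGoogleAnalytics);
onFID(sendToGoogleAnalytics);
onINP(sendToGoogleAnalytics);
onLCP(sendToGoogleAnalytics);
```

Note that this already registers onINP, which matters for the debug-target discussion later in the demo.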
So, assuming you did all of that: this is a site that I manage, classicbig.com, and I've run this test many times. You can see the scores are all in the 90th percentile, at least for desktop; I'm still working on mobile. But you can see that I'm able to achieve a passing rating for the CWV metrics. Now keep in mind, this is lab data, run on a specific connection in a specific location. But the key thing to notice is where it says "Core Web Vitals Assessment: Passed". That means it passed for all users, so it's passing on field data too.

All right. Could someone give me a thumbs up if you can see the graphs? Okay, cool. This is what it looks like in Looker Studio. There's a small bug in the template provided by Google: every time you switch tabs, you need to reset the dates, because it defaults back to 2021; I'm not sure why. So let's look at the metrics for the last 28 days, for example. Now, I know I made a change around the 18th of August, which is around this point in the graph, and you can see that when I made that change, my LCP started dropping drastically. On August 17 it was quite high, almost at the point of "needs improvement", and after I made the change, on the 19th, the very next day, I could immediately see that it had improved my LCP. If I weren't using web-vitals.js, I would have had to wait a whole month to see that data.

I'm doing pretty well on FID, and on CLS I'm also doing pretty well. The template also gives you some more data, like how many users you have, how many sessions you have, and the record count. Every time an LCP, FID, or CLS value is sent to Google Analytics, it counts as a record, so these are how many records it received in the time frame. It does not store every single count; it does sampling.

Under user analysis, again, the bug is that it switches back to 2021, so you need to flip the dates back to whatever you want. Now, this tab I'm still trying to understand how to use; I'm by no means an expert in this area. It's important information, but I'm not fully sure how to interpret it. The way I interpret it is to look for outliers, which would be where the graph, let's look at the blue one, has a really long extension. For example, tablets, under device type. But one thing to notice, let me unclick that: I have an outlier for tablets, but I don't have a high record count for tablets either, so that's not a true outlier. You can see that I have a lot of mobile users, and mobile and desktop are more or less the same there. By the way, we're viewing LCP metrics right now.

Okay, here's a good example. It's not actually Android, but let's just assume that this long blue line was Android, that most of my record count came from Android, and that this value was really high, say 3.62 for LCP. That would mean there's some problem on Android devices that I need to look at, so I would go simulate an Android device and try to figure out what's going on with the LCP. This is how you can really target which device or which operating system is causing the issue for your users.

If we flip the metrics to INP, I'm trying to find an outlier, but I don't think there are many. Yeah, there aren't many outliers here. Let's check FID. I'm doing pretty well on FID, so I don't expect to find any outliers there either.
Nothing here either. And let's check CLS; I'm also doing well on CLS, so let's see if I find anything. Okay, maybe there's some slight issue going on on desktop: I can see that desktop is noticeably higher than the others. But I don't mind that; 0.04 is a really good CLS score, by the way. If I really wanted to be stringent about it, I could target it, but this score is already passing, so I don't need to do much here. All right, so as I was saying, that's how I interpret these results; there might be other ways to interpret them that I don't know about, so this is still something to be discovered, more or less.

The part that I really wanted to show you is page path analysis. This is where the fun stuff happens, so I'm going to switch to 30 days. Here is where it tells you exactly what page, and what on the page, is the problem. So if we look at LCP: here's the score, here I can see the path, and here are the highest metrics for those paths. I can tell right away that this div, whatever it is, is causing my LCP to be really high, so I would go to my site, find that div, and try to see what's going on. You can actually click on one of these and it will filter the debug targets for that page. Well, there's no data there, apparently; let's try another one. No data for that one as well. There are also metric sliders, so if you only wanted to show entries with high percentile ratings, you can use these filters. You can also do a lot of sorting and exporting and so on with these graphs.

But the main thing here, the debug target, is really the crème de la crème of why I use this library. I can see right here that number one is the div causing the biggest problem on the site, so I would start by targeting that one; when I fix number one, I go to number two and try to figure out what's going on there, and then I work my way down the list. That was LCP. I'm going to leave INP for last as a special one, so let's do FID. Again, here I can tell exactly what contributed to my FID: I can see I have something going on with this div in the equipments-terms class.

What's important is that these divs might exist on your page, but sometimes when you test, you don't see certain divs. I'll give you a good example: the cookie notice popup. You might have dismissed it, so it's saved for, say, 30 days and doesn't show for you, but it shows for everyone else, and it might be causing a layout shift or something. You would totally miss that, but this is where you pick it up, in the debug target. Well, it's not in the list anymore, but one of these divs was actually a cookie notice. That's a really common thing that contributes to CLS and FID and so on, and this is how you can pick up that nitty-gritty stuff.

CLS, if we were to look at that one: this site does pretty well on CLS, so there's no real work for me to do here. These scores are all pretty good, and there's no information for the path, because it doesn't even register.
Now, INP is a very interesting one. I believe I have not set up INP correctly in the JavaScript code I showed you, which is why I'm getting all zeros here. I think something weird is going on there that I didn't do correctly or just didn't understand, but you should be seeing information in the debug target, which would allow you to target specific things. This is one of the hardest ones to actually fix, because INP measures how users interact with the site across the entire session on the page, so it's not just the first click. It measures the interaction events, clicks, taps, and key presses, I believe, and then takes roughly the worst one, and that becomes your actual INP score. It's practically impossible to know what causes your INP otherwise, because you don't know what your users are doing on your site. But using this library, you can target exactly that. Of course, you have to set it up properly so you actually get some values in here, which is something I still need to work on. INP is supported in web-vitals.js, as far as I know.

By the way, you can also export these and share them as a report with anyone on your team. As for the revenue analysis, I don't particularly use it for this site, but if you did have revenue goals set up in your Google Analytics, this is where they would show up, and it would show you that when you made some change and your LCP dropped, maybe a revenue goal increased. If you had an e-commerce site, for example, this would make a lot of sense to use. I didn't set it up for this site, as I said, but you can set it up so that you see your revenue goals aligned with your metrics, and you can see how your revenue changes as the metrics go up or down.

And that is about it. Is there anything in Looker Studio that anyone wanted me to go back to or see again? Or I can take questions. I think that's the last slide; yeah, that's the last slide. So that was it, and I hope you got some kind of understanding of why you would want to use web-vitals.js, how you can use it, and the end results you can achieve with it. I can take questions, or I can go back and show something if anyone missed anything. All right, if no one has any questions, I can hand it back over to Suji.

Sai is asking: do clients understand it? Sorry, can you say that again? Do clients understand it? Well, that's a video I've wanted to make, actually, because explaining Core Web Vitals to a non-technical person is like explaining rocket science to a baby. If you're not a developer, you have no idea what these things mean, you probably don't care what they mean, and you also don't care to improve them, right? But I plan to do a video that explains what these metrics mean for a non-technical audience. So to answer your question: no, clients definitely don't understand it and don't care about it, and at least some developers don't care about it either. But yeah, I'll eventually do a video that explains what this means for your site, especially if you're not a developer. It's one of the things on my list.

I did have a question, Siobhan, and I might show my naivety here, but just bear with me.
You were saying that you can put these tests and metrics inside a CI pipeline. So could I assume, when you say that, that if you make a change, like you said, and you create a merge request, it tests what was there before against what's there after the merge? Am I right in assuming that it would use the lab data rather than the field data? And are there any problems with using that sort of contrived, lab data?

Yes, it does a simulated test, so it's definitely lab data. How it works is that you set your thresholds in the pipeline. For example, you can say that 75 and above is a failure and below 75 is a pass, or whatever; if your test goes above 75, it fails the pipeline, and if it goes below 75, it passes. So you can set your thresholds. This is done by a Lighthouse CI pipeline, so it runs in a simulated environment; the pipeline is all simulated data. Maybe there's a way to get web vitals field data inside a pipeline; I don't know, but if that's possible, it would actually be great. I haven't figured that part out.

Yeah, I guess any automation in the pipeline is good, better than no automation. Yeah, at least you can pick things up early; you don't have to deploy and then realize after the fact that something affected your metrics. I'll give you an example, and it's a very common one. The marketing team might ask to install some A/B testing code, for example, because they want to test something. The developers go and install that JavaScript code on the site, and no one checks the scores, but you've actually just added an entire JavaScript library to all of your pages, and that affects all of the metrics. If you did set up the pipeline and all of this, it gets picked up at the point of the merge request: the pipeline will probably fail and say, well, this merge request causes your LCP to fail. Then it should go to an engineer who checks why it's failing, sees that it's because you added an entire library, and figures out a way around it: maybe instead of calling the library from an external source you could self-host it, or maybe you could use a trimmed-down version of it. At that point you have to figure out ways around it, but at least the pipeline picks it up, because in most projects I've seen, marketing asks for it, the developers do it, and it's out of mind after that.
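For reference, the thresholds mentioned above live in Lighthouse CI's config file, typically lighthouserc.js. A minimal sketch, with a placeholder URL and hypothetical budget numbers you would tune to your own site:

```js
// lighthouserc.js: minimal sketch; the URL and thresholds are placeholders.
module.exports = {
  ci: {
    collect: {
      url: ['https://example.com/'], // pages to test on each MR
      numberOfRuns: 3,               // multiple runs reduce run-to-run noise
    },
    assert: {
      assertions: {
        // Fail the pipeline when the simulated run breaks the budget.
        'categories:performance': ['error', { minScore: 0.75 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
  },
};
```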
Ivan, Tyler also had a question: how does web-vitals compare to Chrome's Lighthouse? Chrome's Lighthouse runs in a simulated environment, so it's lab data. Web vitals is field data, so it's data from all of your users. Sometimes you might see things passing in Lighthouse that then fail for your actual users, and you might be trying to figure out why, which is almost impossible, because you don't know what your users are doing. That's where the web-vitals.js library can come in. So it's important to remember that Lighthouse, which you can run from the Chrome inspection tool or from Google PageSpeed Insights, is all simulated data. All right, do we have any more questions? Tyler is saying thanks, by the way. Sorry? Tyler is saying thanks. Thank you, and I'm sorry I'm not able to see the chat. I'll just stop recording now.