My name's Katie Hempenius and I'm an engineer on the Chrome team focused on web performance and Web Vitals. Today I'm talking with Chris from the Telegraph. The Telegraph is no stranger to optimizing for third-party scripts. In fact, we talked about them and their optimization work a little bit two years ago at I/O. So today I'm really excited to have the chance to hear from them in more detail about their experiences with optimizing third-party scripts, as well as touching base on how things are going two years later. Chris, over to you.

Thanks, Katie. So as Katie mentioned, I'm Chris Boakes, a software engineer at the Telegraph. The Telegraph, if you don't know, is one of the main news publishers in the UK. I work across the Core Web and Apps team, and I'm here to talk about third parties and web performance.

So at the Telegraph, we use third parties for many useful areas of our application. The core technology team work with third-party providers for web content, so that could be comments or the paywall. The editorial team embed content in articles, so that could be YouTube videos or tweets. The marketing team run A/B tests, tracking pixels, and custom pages. The advertising team have a lot of external requests and advert bidding. And the analytics team have JavaScript includes for tracking behavior. So these are all really useful areas of the site for our end users and for the Telegraph.

However, having so many third parties can be challenging for web performance. Third-party scripts can be unpredictable. They are often implemented outside of the core engineering team, and internal teams aren't necessarily aware of updates and how those in turn can impact the user's experience of the pages. They can impact the load time of your application: the content they ship can have expensive payloads, like unminified scripts and unoptimized images, which can in turn hog the main thread of your application.
A lot of the scripts are also hosted on the third party's own servers, with fluctuating response times. And they can also result in an unstable layout. Areas of the page can be implemented client-side by third parties, like embeds, and if they're not handled correctly, they can cause the layout to shift dramatically after the initial paint.

So what do we do about it? The first thing that we did was deferring all third- and first-party JavaScript. This is relatively easy to implement, just adding a defer attribute to the script tags, and it has huge performance gains, as you can see by the filmstrip on the slide. It's worth noting this will delay the Time to Interactive metric. That metric is slightly less important to us. It's still important, we want our pages to become interactive quickly, but we're a content-heavy publisher, so people come to our site to read the news; there's not as much interactivity. However, if your pages need to be interactive fast, then at least consider using the async attribute, and defer the scripts that are less relevant to the end user.

Another thing we do is regularly audit our third-party scripts and tag managers. It can be difficult to keep track of what's being loaded on your site, who owns that code, and whether it's even being used. As an example, last year we had four duplicate third-party pixels where the code in those pixels seemed to be almost identical. After a little bit of chasing up different teams, we found that two of the pixels were owned by teams that didn't exist anymore. So we were able to quickly remove those from the site, which is what you can see in the bottom left there. We were then able to work with the two remaining teams to consolidate those pixel requests down to just one, so we were able to remove 75% of the JavaScript from those pixels, which is great.

We also audited our own implementation. We noticed that our time to first byte was quite high.
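The script-loading pattern described above can be sketched in markup like this; the file paths are illustrative, not the Telegraph's actual bundles:

```html
<!-- Deferred scripts download in parallel but only execute after the
     HTML has been parsed, keeping the main thread free during load.
     Deferred scripts also preserve their relative execution order. -->
<script src="/js/app.js" defer></script>
<script src="https://example-cdn.com/third-party-widget.js" defer></script>

<!-- If a script matters for early interactivity, async executes it as
     soon as it arrives instead of waiting for parsing to finish. -->
<script src="/js/critical-interactions.js" async></script>
```

One trade-off to note: async scripts execute in whatever order they finish downloading, so they only suit scripts with no dependencies on each other.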
And when we were investigating it, we found that we had a config to switch third parties off site-wide, which was done at the edge using a bucket which had varying response times. So we thought this could contribute to the fluctuating time to first byte score that we were seeing. We refactored the implementation entirely, and we were able to get it down from 0.69 seconds in March to 0.21 seconds in May last year, which is a big win for the performance of the site. And that in turn improved our Largest Contentful Paint metric.

Next, reserving space in the layout for JavaScript-initiated UI. So anything that's going to load on the client side after the initial paint, we try to reserve space for. An example of that is the ad slot above the header. We know there's going to be an ad loaded there, and we will reserve space based on the average size. And we're doing some work at the moment, actually, if the ad is smaller, just to center it in the space that we've reserved for it, to stop the layout from shifting after it's come in client-side. We're also working at the moment to move some of the client-side UI to be served on the server instead. Sometimes this is not possible. As an example, there's a "More stories" section of the pages which recommends content to the user, and that's a client-side JavaScript widget. So like with the adverts, we'll assign the container a height, and even if that height is not perfect, it will still minimize the layout shift for the end user. The end goal there is to improve the stability of the pages.

Keeping the pages performant after changes can be challenging. It's important to ensure the pages don't degrade when you've worked really hard to improve them. So we have some basic, high-level, predominantly quantity-based budgets, which we adjust once we've made changes.
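A minimal sketch of the space-reservation idea, with an assumed class name and an assumed average creative size (not the Telegraph's real figures):

```css
/* Reserve the ad slot's height up front so the layout doesn't shift
   when the ad arrives client-side. 250px is an illustrative average. */
.ad-slot--header {
  min-height: 250px;
  display: grid;
  place-items: center; /* center smaller creatives in the reserved box */
}
```

The same pattern works for any client-rendered widget: even an imperfect reserved height keeps the shift small, because only the difference between the reserved and actual size moves the layout.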
So for the time to first byte improvement that I talked about in the previous slide, we had quite a high threshold before, and then we decreased that threshold after we made the improvement. It can be tricky with budgets and third parties because of fluctuations in payload sizes and response times. As an example, last year one of our third parties shipped their code unminified. Fortunately, we had some quite high-level budgets set on JavaScript size, and it went over the budget and pinged our Slack channel, so we were able to quickly resolve that with the vendor. They then minified their script and served it again. You can see in the chart in the bottom right it exceeding the budget and then coming back down shortly after. And another one this year: one of our third-party vendors was serving duplicate image requests on the Safari browser. We were getting one request from the Telegraph site and then another from the third party. So we were able to have a dialogue with that vendor, explain what we were seeing, and they were able to put in a fix for that. And the great thing about it is that if you fix it for one site, then you fix it for all the sites that are using that third party. So having a dialogue with the vendor is really important.

Next, only load what you need on the page. This is another one which is quite easy to implement: if the page doesn't need it, try not to load it. So instead of bundling everything together, try to split your code out into smaller bundles. On the Telegraph site, the article pages are the only pages which have a paywall and a comments section, so we simply won't load that JavaScript on the homepage and the news pages. And we'll go a step further as well in the example of the comments widget. We don't necessarily know that the user who visits the article page will want to comment, and they won't necessarily need that JavaScript.
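A quantity-based budget check of the kind described above can be sketched as follows; the metric names, the limits, and the notify callback are illustrative assumptions, not the Telegraph's actual configuration:

```javascript
// Assumed budget thresholds, in bytes, for illustration only.
const budget = { scriptBytes: 400 * 1024, imageBytes: 1024 * 1024 };

// Compare measured values against the budget and report each overage
// via the notify callback (in practice, e.g. a Slack webhook).
function checkBudget(measured, budget, notify) {
  const overages = [];
  for (const [metric, limit] of Object.entries(budget)) {
    const actual = measured[metric];
    if (actual !== undefined && actual > limit) {
      overages.push({ metric, limit, actual });
      notify(`Budget exceeded: ${metric} = ${actual} (limit ${limit})`);
    }
  }
  return overages;
}
```

Because third-party payloads fluctuate, thresholds like these are usually set loose enough to tolerate normal variance while still catching step changes like an unminified deploy.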
So we'll only load that JavaScript library when the user clicks the "Show comments" button, which means we're not unnecessarily serving a larger JavaScript file.

And finally, creating a performance culture: work with other teams to improve performance. We have a web performance working group which has representatives from each team, and we'll meet every few weeks to talk about challenges, offer advice, set up dashboards so teams can monitor their own code, make improvements together, and explain the benefits of web performance for a good user experience. It also allows you to get buy-in from all areas of the organization. I hope that was helpful. And Katie, I believe you have some questions for me about third parties.

I do. My first question, I guess there's maybe two questions, is how much effort did this take? And also, how long was it before you felt like you could see results?

It's a good question. So some were fast and quite easy to do. Deferring, which was done before my time, is relatively straightforward. It's just adding a defer attribute to the script tags, and you can see the results really quickly in, say, a WebPageTest run. You can immediately see what happens when you defer all your scripts. Some were a bit harder. The time to first byte improvement required an entirely different implementation, so we had to work a little bit harder to get that one through. But I guess my advice would be to identify the low-hanging fruit and the quick wins that will have a big impact on the user experience, and work on those first. Also, we're working on a lot of layout shift improvements at the moment, and a lot of them, to be honest with you, if you're just reserving space server-side, are not particularly difficult. You can definitely just keep chipping away at it. And we see the results generally quite quickly when we make these changes.

I know you just mentioned WebPageTest. Are there any other tools that you're using?

Yeah.
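The click-triggered loading described above can be sketched with a small once-only loader. The module path and widget API here are hypothetical, not the Telegraph's actual code; in a browser you would pass something like `() => import('/js/comments.js')` as the import function:

```javascript
// Wrap a dynamic import so the comments library is fetched at most
// once, no matter how many times the button is clicked.
function makeLazyLoader(importFn) {
  let modulePromise = null;
  return function load() {
    if (!modulePromise) {
      modulePromise = importFn(); // first click triggers the download
    }
    return modulePromise; // later clicks reuse the same promise
  };
}

// Browser usage sketch (hypothetical module path and API):
// const loadComments = makeLazyLoader(() => import('/js/comments.js'));
// showCommentsButton.addEventListener('click', async () => {
//   const mod = await loadComments();
//   mod.renderComments(document.querySelector('#comments'));
// });
```

Caching the promise rather than the module handles the case where the user clicks again while the script is still downloading.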
So when we're developing, we'll use DevTools and WebPageTest to get quick feedback on our changes. As an example, for Cumulative Layout Shift we use the Experience section of the Performance tab. We'll run an audit of the page through that, and it will quite clearly highlight the areas of the page which have shifted layout, so we know specifically what's contributing to our overall layout shift score. We can also run Lighthouse audits in the browser as well. Once it's in our pre-production and production environments, we use SpeedCurve for synthetic monitoring, and that's also where we set our performance budgets. And we use mPulse for our RUM data, which is really important when you look at something like First Input Delay, because obviously it's a RUM-only metric, so we can only substitute it with Total Blocking Time in a synthetic environment. We would like to set up some anomaly detection in mPulse with Slack alerts. We haven't quite got there yet, but we plan to do that this year. We also then import some of that data into Datadog, which a lot of the other teams have access to, and we can set them up with charts which are really specific to them, which is a good way of spreading the monitoring around the business.

Something you mentioned that I thought was interesting: it sounds like you had a scenario where a vendor started shipping unminified code, and all of a sudden it caused the amount of JavaScript that your page was loading to really skyrocket. And I think you went back to the vendor to get that fixed. Can you talk any more about what that experience was like? Have you gone through that experience with other vendors?

Yeah, definitely. So I guess the key here is having a good dialogue with the vendors. You need to know who to contact. Generally it will be down to the team who works with the vendor to resolve it.
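As a rough sketch of how a layout shift score accumulates from the entries those tools surface: sum the entry values while excluding shifts that follow recent user input, which is how the original CLS definition works (the metric has since moved to a session-window model). The entry shape mirrors the browser's LayoutShift performance entries:

```javascript
// Sum layout-shift entry values, skipping shifts caused by recent user
// input, as the CLS definition does.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}
```

In a real page these entries would come from a `PerformanceObserver` observing the `layout-shift` entry type; for production monitoring, a library like web-vitals handles the windowing details for you.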
So they'll either notice it themselves, as we have Slack alerts set up, or we'll gently give them a nudge if it sits outside the core engineering team. And the great thing about that is, as I mentioned, if you fix it for your site, the likelihood is other people are going to be using that vendor and served through the same CDN. So if you fix it for yourself, you're going to fix it for everybody else who uses the same script.

And have you felt like this experience has changed at all how the Telegraph goes about choosing the third-party scripts that they use?

Yeah, definitely. Performance is definitely a part of the conversation now when we're working with any third-party vendor. We try to measure the impact of it before we agree on the implementation and everything like that. So we'll have conversations with the vendor about: can we host the script ourselves? Can we potentially proxy it through our CDN? And then we'll think about how we load it on the page. In the example of comments, we're not just going to throw that script in the head; we're going to load it when the user really needs it. So we'll discuss the performance of what they want to add to the site, and also the best way to load it on the page.

I think we have time for one more question. So maybe to end, do you have any thoughts, or can you share, how you decided on your performance budgets, how they've changed over time, if they have, and who gets to decide that? I'm curious to hear about that.

Sure, so ours are quite high-level budgets at the moment. There's definitely more we want to do with anomaly detection using our RUM data. Ours are quite quantity-based synthetic budgets. They're not necessarily representative of the user experience as a whole, but they do help us spot any major issues. So in the case of the unminified script, we immediately noticed that because of the budget and the alerting that we had in place.
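The "proxy it through our CDN" idea mentioned above can be sketched as a URL rewrite: map known vendor hosts to a first-party path that the CDN is separately configured to forward to the vendor's origin. The hostnames and the `/third-party/` prefix are assumptions for illustration:

```javascript
// Map vendor hostnames to first-party proxy prefixes. The CDN would be
// configured separately to forward these paths to the vendor's origin.
const proxiedHosts = {
  'cdn.vendor-example.com': '/third-party/vendor',
};

// Rewrite a third-party URL to its first-party proxied equivalent;
// unknown hosts pass through unchanged.
function toFirstPartyUrl(url) {
  const { hostname, pathname, search } = new URL(url);
  const prefix = proxiedHosts[hostname];
  return prefix ? `${prefix}${pathname}${search}` : url;
}
```

Serving the script from your own domain also gives you control over caching headers and avoids an extra DNS lookup and connection to the vendor's host, though you then take on responsibility for keeping the proxied copy fresh.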
So the budget alerting, even at a synthetic level, was really helpful. It's currently owned just by the technology team, but we would like to get more buy-in from the product owners, to get them monitoring it more closely themselves.

Okay, I think that's all we have time for today. Thank you so much, Chris. And now I'm gonna turn it over to Hanny.