Hello, everyone. Welcome, welcome. Glad that you're here for the session. Can everybody hear us with the audio setup? Fantastic. OK, so we're here to talk about web performance and Core Web Vitals. We'll do some quick introductions. My name is Brendan McNamara. I'm from Google. I work on the Google Chrome team, and for Chrome, I work on our open source support. So I'm mostly working in the CMS space, and also with some commerce platforms and some CDNs. One consistent thing we see from the browser perspective is that performance is essential for any user, regardless of where they're accessing the web. That's why we're working with CMSs, and Drupal in particular, to improve the web performance experience for end users, as well as for developers who are bringing performance solutions to different customers.

Yeah, and I'm Yanis. I come from Tag1 Consulting. We teamed up with Google to improve Drupal's performance, both on the front end and on the back end. I've been with Drupal for a very long time. I love performance. I used to work at Examiner.com, which used to be the biggest website on Drupal. And since performance is dear to my heart, it's been really interesting to work on the problems presented to us.

So we wanted to walk through a couple of things. First, we wanted to give an overview of what web performance includes and what the Core Web Vitals are. They're a way of quantifying a user's experience, so that we can then isolate the factors that can improve that experience. Then we wanted to bring that specifically to the Drupal context: what can sites built on Drupal do to improve their performance, and, essentially, why should they do it? What's the association to the outcomes your business, or a nonprofit, or a government is looking to drive through its site, and what part does performance play in those objectives?
We also wanted to talk through, as Yanis mentioned, the ways that performance has been supported in the Drupal space, and some of the collaboration our two teams have been working on. We also have an exciting project we've been contributing to, which is making automated performance testing available through Drupal. So we want to connect the two: yes, Core Web Vitals are important, but how do we act on them? This feature we've been developing is the way to do so. And then we'll go to questions.

Okay, so first I wanted to include a link that will bring you to a resource the Google team has created: web.dev/vitals. In the talk we'll go through an overview of Core Web Vitals, but I wanted to point everybody to this page because it's where you can find many more assets about performance. If you want to see case studies, or the technical documentation, web.dev/vitals will bring you there. I recognize that for many developers, a topic like performance is something you understand: you agree with it, you think it's the right thing for the user and the right thing for the organization. But the challenge typically is making that argument internally. Of course, every organization is resource constrained, so it can be difficult to prioritize different improvements to the site, performance being one of them. Many of the case studies we've collected on this page demonstrate the business connection to web performance, so you can make the case internally for why investing in performance is good for an organization.

Okay, so from the Google side: performance has always been a challenge on the internet. On the Chrome browser team, we use over 200 metrics to evaluate performance, which is quite a lot.
I mean, for anyone who's a developer, 200 metrics is more than what's really digestible, and also more than what's actionable. So in 2020, Google launched the Core Web Vitals, a way of quantifying user experience around three focus areas. The intention was to create categories that are understandable as well as actionable, and to provide a unified way of understanding performance for any developer. So whether a developer is working with an agency, a brand, a nonprofit, or a government, you can all use the same metrics and the same language to improve overall performance.

The first component we considered is the first thing that happens when you go to a site: you click on a link, you go to a new page, and how quickly does that page render? It may take quite a long time, so we wanted to identify the time required to deliver a good user experience. So the first metric we'll go into is around loading. The second is around interactivity: once you're on that page, how responsive is it to the actions the user takes? We've quantified that as well, to identify the right threshold for the expected responsiveness of an interaction. And then last but not least, we call this one "does the page spark joy?" This is about stability: are there movements on the page that are outside the user's expectation? Think about when you're looking at an article, and later on an ad is rendered at the top of the page, which shifts the entire content of the page down. It's not an action the user caused the page to take, but it still impacts how the user is reading the page, so you may lose your place. Or you may go to click a link, but all of a sudden the page has shifted, so you're no longer at the spot you expected.
So through this final metric, we wanted to quantify which shifts on a page lead to a disruption of the user's experience. Now let's go over the technical terms for the Core Web Vitals. Before going into each of the three, I wanted to point you to the bottom, with the traffic light that we have: green, yellow, red. For each of the vitals, we've identified what good looks like. A real motivation for the Core Web Vitals is to have a quantified definition of what creates a good user experience. And for a page to be considered as passing Core Web Vitals, 75% of page loads (the 75th percentile) must meet the threshold that qualifies as good.

So, for this first metric, loading, we've identified that 2.5 seconds is what makes a good user experience for a page load. A site where 75% of page loads meet that threshold is what qualifies as a good user experience. A little more information about this first metric: Largest Contentful Paint is the formal, technical name. What it identifies is the primary content of the page. It could be the largest image, or it could be the most substantial block of text: whatever is most critical for that page. How quickly does that content render to the user? Again, if that happens within 2.5 seconds, that's what we qualify as a good user experience.

The second metric is for interactivity. And I'm curious from the group: is this metric, INP, Interaction to Next Paint, something that people are familiar with? Is this one people have seen before? The term, some mostly know. The reason I'm asking is that it's a new metric for interactivity. Just a few weeks ago, Google announced that this is the metric that will formally become our interaction metric. Previously, the metric we were using was First Input Delay, or FID.
And the reason we changed is that, for interactions, 90% of a user's time on a page happens after that page has loaded. So if you're going to a menu select section, or you're pressing a button, you as the user expect it to respond pretty naturally: you expect the menu to open, you expect the button to open up new options. But if you hit those pieces of the page and they're not responding to you, that becomes an issue. The previous metric, First Input Delay, evaluated the latency only of the first input a user made on the page. Interaction to Next Paint takes into account all interactions that happen on a page, so it's a more holistic metric. And while Google announced it as the formal metric just a few weeks ago, it becomes official in March of 2024, so there's a little less than a year before that happens. Back on the web.dev page I mentioned earlier, we have a ton of collateral that explains more about Interaction to Next Paint.

And then finally, the metric for "does this spark joy?" This is about stability for a page. Once the page is loaded, is the page only adjusting based on the user's interactions? This is again the example where an ad renders on a page and shifts the content for the user. We wanted to quantify, when a page shifts, how much of that is non-disruptive to the user, and how much actually impacts the user's experience on that site. What we found, and you'll see it in the green section, is a threshold of 0.1, which corresponds very roughly to 10% of the viewport shifting. If content is only slightly moving on a page, that doesn't interfere with the user's experience. But when you have, say, 50% of the entire content moving itself from above the fold to below the fold, that does have a bearing on the user's experience.
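To make two of those numbers concrete, here is a small illustrative sketch: the 75th-percentile passing rule, and the per-shift score that feeds into the stability metric. The nearest-rank percentile and the helper names are simplifications of ours, not CrUX's or the browser's actual implementation.

```javascript
// A vital "passes" when the 75th percentile of observed page loads meets
// the "good" threshold (2.5 s for LCP). Nearest-rank percentile sketch.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(sorted.length * 0.75) - 1];
}

const lcpSamplesMs = [1800, 2100, 2300, 2400, 5200]; // five observed loads
const passesLcp = p75(lcpSamplesMs) <= 2500;         // p75 = 2400 ms, passes

// Each unexpected layout shift is scored as
//   impact fraction (share of the viewport occupied by shifting content)
// * distance fraction (how far it moved, relative to the viewport).
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Half the viewport's content moving down by 10% of the viewport height:
layoutShiftScore(0.5, 0.1); // 0.05, under the 0.1 "good" threshold
```

This is also why a half-viewport shift over any substantial distance tends to fail the stability metric: `layoutShiftScore(0.5, 0.25)` is already 0.125.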
All right, so now getting more into the Drupal section. This chart comes from HTTP Archive, an open resource that analyzes web data and shows you the performance metrics for that information. It pulls from the Chrome User Experience Report, also known as CrUX, and shows the Core Web Vitals passing rate for different technologies. The view I have here is from the technology report, and the bottom blue line is the entire web. For the entire web, about 40 to 50% of origins are passing Core Web Vitals. Drupal, however, is pretty substantially above that. So Drupal itself is a fairly performant platform, but because the web overall is in a position to improve performance, Drupal has a key role to play in that. Especially as a leader in the performance space, there are ways we can take learnings that Drupal has been able to apply and spread them more broadly across open source environments. But there's also much more headroom. If 50% of origins built on Drupal are passing Core Web Vitals, that means, of course, that 50% still have the capacity to optimize.

One other interesting note about the technology report: at the very top, where it has the technologies that are selected, you can break the report down along a couple of different dimensions. Right now I have it filtered to Drupal, but if you want to compare other technologies you're using, you can do that. If you want to see how site builders, or themes, or hosting platforms, or even JavaScript libraries or CDNs are performing, you can do so. That's a way for you to understand how your tech stack is passing Core Web Vitals, and which components of it have the capacity for additional optimization.
And this connects to the web.dev link I mentioned earlier: why does performance matter? Aside from having a bearing on the user's experience, it also connects to the business outcomes you'll see. We've listed here a number of ways the optimizations impact an organization, and I encourage you again to reference that web.dev page, because we have case studies that speak to each of these KPIs. You can see how loading, interactivity, and stability all impact different business outcomes. So for your site, if user engagement is what's most critical, there's a way you can associate certain performance optimizations with that outcome.

And the final note we have here is about how these changes can be made quickly and easily, and the way we do that is through another tool: PageSpeed Insights. This is another one I'm curious about. Have people used PageSpeed Insights before? Yeah, it looks like a good amount of people have, so everybody should be pretty familiar with this. The way this tool works is that it pulls in information from two sources. The first is Google's Lighthouse, which uses lab data for measurement. The second is the Chrome User Experience Report, which uses field data from how sites perform in a live environment. What we see here is that you can enter a domain, search for it, and it will tell you the Core Web Vitals outcomes for that domain. For this example, I pulled up the conference site, events.drupal.org/pittsburgh2023, and we can see from the metrics where the site is performing well and where there's space for optimization. And what's most critical about PageSpeed Insights is not just the observations we can make, but the actions we can take from them.
So if you're optimizing a site, for yourself or for a customer, you can go in here and observe where Core Web Vitals are passing, but also where there's space to improve. Right down here, we can see: okay, eliminate render-blocking resources. We also have an estimated impact you would see from making this optimization. Particularly when you're dealing with limited time and you want to maximize the outcome from the time you spend working on performance, this is a great way to see what specifically is impacting performance and how to act on it. As you'll see, we also have recommendations specific to this being a Drupal site, which refer you out to ways to actually realize the benefit: render-blocking resources, and how you can make that adjustment.

Additionally, as an example, I wanted to pull up an out-of-the-box Drupal site. This is a site that's not live; it's a templated Drupal site. What we see is that, in isolation, it's passing Core Web Vitals pretty clearly, pretty strongly. But it also shows us that there inevitably are other places where we can still make performance optimizations. This relates to the project that the Google team and Tag1 have been working on, by showing us where the core platform has room to improve. If we make an optimization to Drupal core, that resonates with every site built on the platform. Seeing this out-of-the-box site, it's good to know that, for one, Drupal is very strong in performance, but also that there are specific things, say image optimization, that we can do to improve performance for all sites built on Drupal.

And so, connecting to what we're working on in Drupal core, we also wanted to talk through the different optimizations our teams have been working on to improve Drupal writ large. I'll pass it over to Yanis. Thank you.
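As a side note for automating this kind of check: PageSpeed Insights also exposes a public JSON API (v5), so the same report can be pulled programmatically. A minimal sketch, where the helper name is ours:

```javascript
// Build a request URL for the public PageSpeed Insights API (v5), which
// returns both Lighthouse lab data and CrUX field data as JSON.
function psiRequestUrl(pageUrl, strategy = 'mobile') {
  const base = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
  const params = new URLSearchParams({url: pageUrl, strategy});
  return `${base}?${params}`;
}

// Usage (in a browser, or Node with fetch available):
// fetch(psiRequestUrl('https://events.drupal.org/pittsburgh2023'))
//   .then((r) => r.json())
//   .then((d) => console.log(d.lighthouseResult.categories.performance.score));
```

For heavier or scheduled use, the API accepts an API key as an additional query parameter; for occasional checks, the bare endpoint above is enough.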
So, we worked on improving Core Web Vitals results directly in the past, and that project is, as you'll see, more or less done. Now we're working on another project, so we're continuing our collaboration. With this first step, we focused on the lowest-hanging fruit to improve Core Web Vitals for all Drupal sites, and we focused on lazy loading. This is one of the good ways to improve these metrics. As part of that effort, we added an option to the image formatter to enable or disable lazy loading of the image. It's this selection down here, where you can select whether an image should be loaded eagerly, so immediately, or lazy loaded when it's needed. This was committed and has been in core since 9.4, so you basically got it for free if you updated to something more recent than that. The recommendation we would give is: use lazy loading for all images except the one that's most prominent on page load, like a banner at the top or something like that. For everything else, you can enable lazy loading and get the benefit practically for free.

Then, related to that: the first one was for the standard image formatter, but we also have responsive images, where you can provide multiple sources and let the browser decide which one to use. We added the same configuration to the responsive image formatter. That feature was committed and is coming out in 10.1, and the guideline is the same: use lazy loading everywhere except for the most important, topmost image on the page. You can wait until 10.1, or if you go to that issue, 3191-something, you can grab the patch and start using it before you upgrade to 10.1.

Then, still staying on the lazy loading wagon, we also added a text filter to enable lazy loading of the images that you've embedded through CKEditor.
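Before going on to the filter configuration, it's worth noting that all of these formatter options come down to emitting the standard HTML `loading` attribute on the rendered tag. A rough sketch of the resulting markup; the helper function below is hypothetical, not Drupal's actual API:

```javascript
// Sketch of the kind of markup a formatter emits: `loading="lazy"` defers
// offscreen media, while the most prominent (LCP) image stays "eager".
function mediaTag(tag, src, {lazy = true, width = 800, height = 450} = {}) {
  const loading = lazy ? 'lazy' : 'eager';
  return `<${tag} src="${src}" width="${width}" height="${height}" loading="${loading}">`;
}

mediaTag('img', '/hero.jpg', {lazy: false}); // banner at the top: eager
mediaTag('img', '/gallery-1.jpg');           // below the fold: lazy
```

Setting explicit width and height also reserves space for the image before it loads, which helps the stability metric discussed earlier.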
So if you want to leverage that, go to the text format configuration and enable the filter called "Lazy load images". When you enable it, all images will be lazy loaded by default, unless you put the loading="eager" attribute on the image, in which case they won't be. Just like the previous one, it's already been committed to Drupal core and is coming out in 10.1; if you'd like to use it before then, go to the issue in the title and you can try the patch. And then finally, lazy loading of iframes. This is on the formatter for oEmbed content, exactly the same as all the previous ones; I won't repeat myself. Again, it was committed and is coming out in 10.1, and there's an issue if you want to patch it. So quite a lot of things, and if you keep your Drupal up to date, you get all these new features for free, and we encourage you to start using them.

Now we've started working on another initiative, another project, which is automated performance testing. When we were discussing how we could help and where we could have the biggest impact, we realized that we have this great quality assurance system for Drupal that runs all sorts of tests on every patch, but on the other hand, performance is tested in a very limited way. It was basically always done manually by core committers, usually before a new release, which meant that issues were identified late, when they're hard to fix. One example was a regression that got into the 9.5 release. And since it's manual, it takes a lot of precious time. So if we already have this automated system for quality assurance, why not test performance as well? We decided that would make a huge impact, so we're doing it. It's led by core committer catch, Nathaniel Catchpole. Did I pronounce that right? I think so. Catch. I always know him as catch. The goal is to have a framework for testing performance in Drupal's testing system.
And then, to have an initial set of tests that we would run on every commit, initially at least. The limitation here is the infrastructure cost. We would be very excited to extend this to every patch, but with the current funding that we have, it's simply not possible. Still, we thought that testing on every commit will give us pretty frequent, early feedback about all the changes going into Drupal core. And we've already identified problems: the issue that's linked there is about Umami and how Umami doesn't handle the top banner image in a really nice way. I think that was actually the top recommendation visible on PageSpeed Insights when Brendan showed it to you. This will run on Drupal Association infrastructure, just like the existing test bot, and could, in theory, be expanded to contrib later. Once the framework is done and the base classes for the tests are there, the only limitations are infra cost, and maintainers of contributed modules providing the performance tests. So when this gets in, you can even use it in your own in-house pipelines for your projects and run these tests there. We really think it has an enormous impact and a lot of potential.

Which brings us to how we can work together to improve Drupal. Because only if we work together will Drupal thrive, right? We need a lot of collaborators to have a massive impact. If just a small portion of the community contributes, we can't achieve as much as if everybody comes together. And there are many ways to contribute. We can improve tooling; that's the more technical side, like automated performance testing. By the way, if you want to get involved, there is a meta issue that links to other issues working on specific improvements and specific tasks. One way to help is to improve the Drupal-specific recommendations for Lighthouse, which then show up in PageSpeed Insights.
Like the recommendation we've seen: there is a repository of recommendations for every platform, and it's a public GitHub repo. If you want to contribute that way, you can go and provide better recommendations that point people to the right modules, or maybe to the right pages in the Drupal documentation, or something like that. We can build new contributed modules, maybe a module that displays your Core Web Vitals metrics, so it could be installed something like SEO Checklist, but for Core Web Vitals. And you can also help with promotion: webinars, blog posts, podcasts to raise awareness about this topic and about the projects that are going on. Again, if you're more into technical stuff, there are a bunch of issues in the core queue that would improve performance, like adding support for contemporary image formats like AVIF. WebP is already supported: we can use WebP in image styles, but the standard installation profile still doesn't use it. So maybe help do that, and improve the out-of-the-box performance of standard Drupal that way. And there are other things, like preloading images and minifying core JavaScript. If you're interested in these topics, check out these issues. And if you have any questions or ideas or comments, please get in touch; our email inboxes are open for anything that could improve Drupal as a whole. Now we'll be happy to take some questions.

Oh, yeah, sure. I'll keep it on and you can ask questions. The question was about the 9.5 regression. Yeah, I think so. I'd say with HTTP Archive, especially being able to observe the trends over time, I find that to be pretty informative. Generally, we would hope, and do see, that performance is trending upward, but we can for sure see where there are inflection points. So whenever there is a major release that has something specific to performance, we will notice the effect on overall performance.
But then, on the other side, it can be an opportunity to identify when something is a pretty noticeable regression, so it's a good time to check in on that. Sure. "Do you have the URL for the HTTP Archive?" Yes, we do. The question was whether we have the URL for HTTP Archive; that itself is the URL, just HTTP Archive. More specifically, what we were looking at was the technology report, so I'd recommend looking at the technology report; you can customize it there by the technology breakdown, and that's where we can analyze one CMS against another. It should be .org. So this is it: httparchive.org. We have Reports, and then the Core Web Vitals technology report. And then in here, this is what it looks like with other CMSs included, and then we can specify Drupal. Yeah, that's the report.

Sure. Yeah, so the first was around frameworks, is that correct? Tag Manager, yes. For both of those, actually, I think that with Drupal contributing towards performance, it does make more noise around performance. When we're looking to create, you know, a culture of performance, and we have specific issues that Drupal as an organization is looking to improve, that can be a way to surface what's important for one of the largest platforms. On those two specifically, I'd be happy to bring that back to some of our engineering teams to speak with them. So, happy to talk about it.

So, why did we decide to do automated testing on commits initially? We're currently focusing on Drupal core, and core doesn't have that many commits, actually. It has a lot of activity in the issue queue, which is the majority of the load on the current testing framework. But there can be days with no commits at all, or maybe 10 or 20 commits per day. The biggest number of commits would happen during a DrupalCon sprint, for example.
That would be one example, but it's measured in tens per day, so it's not that big a number. We thought that was the best trade-off between the impact and the infrastructure we would need to run it. That's why we decided to do that. We're not running every commit to every contributed project at this point; we would love to, but that would require much more infrastructure.

Yeah, the question was about base classes for performance tests. When I displayed, where's the slide? Okay, now it's here. So that issue there, which is the meta issue for automated performance testing, links to another issue that is creating a test base class for performance tests. If you're familiar with tests in Drupal, you have base classes for unit tests, for kernel tests, for functional tests; this will be just another one like those. So if you wanted to create your own performance test, you would extend that class and write a test. That's the idea. It's still not in; the issue is still active. But if you're interested in that, go to the meta issue, which is not huge, and there's another issue linked from it about adding this base class, which already has a patch and discussion.

So, the report itself, you could break that out. The one report that I showed, PageSpeed Insights, takes a site in isolation, so if that site is using a CDN or a specific hosting platform, all of that will be included in the insights we're seeing. But with HTTP Archive, there is a way you can break that out by the different CDN providers, so you could see one relative to the other. So there is a way, yes. Yes, and it's part of an extended team as well. From Google, we're from the web platform side, the Chromium open source project, and we work with a number of organizations within the Drupal space as well, Tag1 being one of those organizations.
And then from Tag1, it is part of a larger extended team as well. Any other questions? Well, wonderful. Thank you.