So improving your site and making it faster will actually have an impact on your bottom line. Having a fast site means that Google is going to send more traffic your way, and it means visitors will probably stay longer; there's a direct relationship between how slow a site is and the likelihood that people will just leave. So at the end of the day, whatever it is you're using your site for in order to have a positive impact on your bottom line, making it faster is going to make it more effective at doing that.

There are a variety of factors that can make your site slow or fast, and we'll talk about those in turn. But let's dive into the surgical approach we're here to talk about. We're going to start by running some tests to gather data. We'll use that to formulate a diagnosis and figure out the things we actually need to go in and change. We'll make those changes, and then make sure we don't relapse over time and end up back where we started.

So let's begin with the testing. Really, we have three goals. First, we want to understand what can or can't be changed within our site. Second, we want to figure out which pages within our site are the ones we really need to focus on. And third, within those pages, what are the specific elements we need to go in and make better?

The first part is really about having the conversation, sometimes with an external client, sometimes with a business owner within your own company, around what the real requirements are and which things end up in the site just because they're window dressing, nice to have, or the new hotness. As we talked about before, things that may make the site slightly sexier but degrade its performance are actually costing the business money. A lot of times, for example, clients want a carousel.
If you're having that conversation with the client and can't get them to budge, you might show them the site "Should I Use a Carousel?" If you haven't visited it, I'll give you a spoiler: the answer is no. There's lots of great UX data there showing that carousels are really not effective and just add to the bloat of a site.

And here's a quote that's been attributed to da Vinci, and that Steve Jobs also took as a core philosophy. I think it's really important when we're having those conversations around not only the content but the features of a site: really try to pare it down to the things that significantly add to the user experience and help users do the things they need, or that you as a business need them to be able to do on the website. Everything else should be stripped away, keeping the site as simple as possible.

Moving on to the question of identifying which pages need attention: for a site that's already in production, analytics tools like Google Analytics can give you a lot of that data. Here we've set it to show us the slowest pages on the site, with a secondary metric of page views. That can be really useful because, as an example, the home page is only the third slowest page, but it's definitely the one that gets the most page views, so that's naturally where we'd want to start our investigation. For a site that's still in development, any crawling tool (in this example we're using Screaming Frog) can give you similar data: you basically sort by response time so that it shows you the slowest pages first.

Once we've figured out the pages we want to dive into, we need to figure out the specific elements, and start looking in detail at all of the elements and assets being loaded by each page. For our purposes, we're going to break that into three different segments.
So we've got the page load, the page itself; the on-site assets, so the CSS, JS, and images being loaded for that page; and then the third-party assets, which in some cases could be tracking pixels or off-site services that get called as part of the page load.

The tool that I like to use is called WebPageTest. It's very feature-rich and absolutely free. When you run a test against a URL, it gives you these nice letter grades at the top that provide a quick sense of how well it thinks your site is doing. It gives you some very detailed metrics, not only the full page load but also some steps within that, plus a couple that are really about the user experience, the user's perception of speed. Things like first paint: when someone first goes to your URL, how long is it before they start to see things drawn on the screen?

In addition to that data, it gives you a really nice waterfall view of the page call itself, then all of the different assets that get loaded, and some of the different waits in terms of server connections or first-byte times as those assets load. As you can see, this specific example has literally hundreds of different assets being loaded for a single page load.

It also gives you this nice connection view. I like this one because, for every connection the browser makes to an individual server, it aggregates in line all of the assets loaded over that connection. That gives you a sense of what's being loaded from which domains in a more compact way, which can be really useful. And finally, if you need all of the really detailed data on those individual calls, you can get that too.

Coming back to the way we wanted to categorize the calls within our page load: certainly the first line is our page load.
Then we can highlight all of the on-site assets, and everything that's left is our third-party assets. This view is useful for getting a sense of the breadth, but for this kind of analysis the connection view is really nice, because at a glance you can quickly see which areas have potential issues we want to address. For example, over two seconds for the page load is much too high; ideally you'd want to be more around half a second. We can see that the on-site assets are contributing almost eight seconds, which again is far too high. But really, the biggest issue is probably the third-party assets.

One thing we see pretty commonly is that if you look at any of these, the actual load of the asset itself, that narrow little sliver, is basically insignificant. But the time it takes the browser to negotiate the SSL connection and wait for the asset to be returned is significantly longer than the download itself. That's why, even though from a data standpoint it may only be one or two K to add a tracking pixel or a couple of K for some JS library, all of those extra connections aggregate up to really slow down the performance of your site.

So now that we've got some data, let's start to analyze it and figure out the things we need to change. For the page load, time to first byte is often the key metric, and as we talked about, we know where to find it on our chart. If we need to do a deeper dive and really understand where within that page load we can start to optimize, there are tools like Site Audit (in D7 it's a drush command; in D8 it's a module) that will spit out a nice report with a lot of recommendations on how to better configure your site.
It covers things like best practices, how your blocks are set up, caching settings, database health, which modules you're using, and so on. Lots of great information there, and definitely a good starting point.

A commercial tool like New Relic is great because it has a historical view. If you notice that your site is being really slow, you can go in and see how long that speed issue has been going on. You can compare this week to last week to get a sense of whether it's something very recent, and then potentially correlate it with, say, a recent code deployment. You can also use it to look at which modules are actually contributing to the load time of the site. In this case, we've got a single custom module that looks like it's probably the single biggest contributor to load times, so that's definitely where we'd want to start digging in and doing more analysis. Even within a particular call, we can see which functions are contributing most to the load times.

Another great tool for that kind of code-level analysis is Blackfire, again a commercial tool. It does some of the same analysis in terms of telling you which functions are contributing to load times, but the thing that's really nice about it is the tree-level view: even within a function, it helps you visually map out what's making that function call take so long.

So let's talk about what we can do to improve the page load. If it's the actual loading of the page itself, which doesn't tend to be the issue, but if it is, and you're, say, loading a lot of data from an external data source, try caching that locally. Or if you're loading a lot of data but only displaying a portion of it, make sure you're actually pulling the data in a way that's pre-filtered.
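In Drupal you'd typically do this local caching through the Cache API, but the pattern itself is language-agnostic. Here's a minimal shell sketch of the idea, caching a remote resource with a freshness window; the URL, file path, and max age are all illustrative, and it assumes curl is installed:

```shell
# fetch_cached URL CACHE_FILE MAX_AGE_MINUTES
# Re-downloads the resource only when the local copy is missing or stale.
fetch_cached() {
  url=$1; cache=$2; max_age=$3
  if [ ! -f "$cache" ] || [ -n "$(find "$cache" -mmin +"$max_age")" ]; then
    curl -s -o "$cache" "$url"   # the slow external call happens only here
  fi
  cat "$cache"                   # every other request is a fast local read
}

# e.g. fetch_cached https://example.com/products.json /tmp/products.json 60
```

The same shape works inside Drupal: check the cache bin first, and only hit the external service on a miss or after the expiry you've chosen.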
If it's just a lot of page content, certainly splitting that up can help, as can making sure your markup is as clean as possible. And if you've got a structure with a ton of content where some of it is hidden, say within details elements or accordions, then consider whether it's possible to load some of that through Ajax as the user interacts with it.

More typically, though, what you're going to run into is a slow first byte time, where things are being slowed down by the complexity of what the server has to calculate in order to deliver the page to the user. You can try some things at the server level: you can give it more CPU or RAM, which will get expensive; you can make sure you're using the latest versions of software, for example PHP; and having a reverse proxy like Varnish, or server-side caches like APCu, Redis, or Memcached, will definitely help. Within your Drupal configuration, making sure that you're leveraging caching as much as possible will be very effective.

The other thing is, if it's, say, a view that's being really slow to render, you can go into your view settings and have it show the SQL query it's actually generating. Then you can either analyze that manually, or take it into your MySQL client with an EXPLAIN statement in front of it, and that will give you some data on whether the database could potentially be structured better. It was going through an exercise like that that made me realize that in core, dates and date ranges are actually stored as strings, which creates some pretty massive performance issues if you're using them in views. So I made a module called Smart Date that stores them as timestamps; if you run into that, there's an option for you.
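That EXPLAIN step looks roughly like this. The query below is only illustrative (your actual SQL comes from the view's preview), and it assumes you have MySQL client access:

```shell
# Take the SQL that Views shows in its preview, prefix it with EXPLAIN,
# and run it in your MySQL client. The query here is an illustrative stand-in.
VIEWS_SQL="SELECT nid, title FROM node_field_data ORDER BY created DESC LIMIT 10"
echo "EXPLAIN $VIEWS_SQL;"
# Pipe the echoed statement into `mysql` (or `drush sql:cli`) and look for
# type=ALL (a full table scan) or an empty 'key' column: both suggest a
# missing index or, as with core's string-based dates, a costly comparison.
```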
Other modules you might want to use: definitely BigPipe, which is installed by default now in core. It can really help the user experience in the sense that the page doesn't wait for all of the slow elements to render; it puts in placeholders for the slower elements, renders the rest of the page, and then dynamically replaces the placeholders as those elements become available. If it's a site that recently migrated, Fast 404 saves the server from having to bootstrap Drupal for every 404. Using syslog instead of the database log is a more efficient way to store any kind of error messages. And to the extent that the Purge module allows you to force cached content to expire automatically, it lets you set your cache times much longer, which can help with performance as well.

In terms of modules to uninstall, let's go back to that idea of simplicity: anything that's only there as window dressing, let's strip out, and not force the server to work through more complexity than it needs to. Certainly any development modules, so Devel or Browsersync or Kint, shouldn't be enabled in production, ideally not even in the code base. UI modules typically don't need to be enabled on a production site. The Statistics module is notorious for really bulking up the database, and usually you can get the same information out of Google Analytics anyway. In a similar way, the Search module in core puts a lot of extra stress on Drupal in terms of having to index content and provide results, so if you're using a host like Acquia or Pantheon that has Solr available as an external service, that's a much more efficient way to serve up your search. And PHP Filter, in addition to being a giant security hole, degrades performance because any content using it can't be cached, so it's definitely not something that should ever be used on a production site.
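As a quick way to check first byte times from the command line before and after these kinds of server-side changes, curl can print its timing breakdown. A minimal sketch; the URL is a placeholder, and it's worth running a few times so caches are warm:

```shell
# Print curl's timing breakdown for a URL: DNS lookup, connect time,
# time to first byte, and total time, all in seconds.
ttfb() {
  curl -o /dev/null -s -w 'dns %{time_namelookup}s | connect %{time_connect}s | ttfb %{time_starttransfer}s | total %{time_total}s\n' "$1"
}

# e.g. ttfb https://example.com/
```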
So next let's talk about those on-site assets. In terms of a metric, the requests and bytes sort of reflect that, although they tend to be a mixture of on-site and third-party assets. But visually, you can look at the connection chart and quickly get a sense of how much impact they're having on your load time. And I'd say this example is pretty typical in the sense of images being a major contributor.

If you need to do a deeper dive on those on-site assets because you see that's where you have an issue, a couple of tools can give you even more data. The first is the Lighthouse audit that's built into Chrome. As you can see, it gives you insights not only on performance but on some other areas as well. It gives you some of the same metrics, maybe in a slightly nicer presentation, as well as specific recommendations on how it thinks you can improve your site, with some nice estimates of how much impact each change will have. And if you open one of those up, it will even show you, on an asset-by-asset basis, how much of an improvement it thinks you'll get by optimizing each one.

PageSpeed Insights is another Google product, and the information it gives you is pretty similar. The main difference is that the test runs from Google's servers, so to the extent that you might be optimizing performance for SEO purposes, it's definitely useful for getting a sense of how Google perceives the speed of your site. But as you can see, a lot of the metrics and recommendations are basically the same information.

In terms of optimizing the delivery of those assets, if you're able to use a CDN, that's definitely going to make a major impact. Services like Cloudflare have some inexpensive plans to get started, and there's a CDN module you can use. For your CSS and JS, again, only use what's necessary.
If you're using something like Bootstrap, it's really intended as a starter kit where you strip away what you don't need, but a lot of people dump in the whole thing and build on top of it. So again, don't try to use everything that's available; don't throw in a JavaScript widget to solve every little code problem you run into. Really try to be judicious about what you're using. The Advanced CSS/JS Aggregation module can be really powerful in terms of helping you aggregate, compress, and minify your CSS and JS. It'll do things like Brotli compression that you won't get out of Drupal core, and it can also help by moving render-blocking elements to the end of the page, so the page starts to render more quickly and gives the user that perception of speed.

In terms of images: definitely use image styles to appropriately size your images, actually resizing them rather than serving giant images and shrinking them only through CSS. Responsive images, which we'll talk about more in a second, are really powerful for giving each type of client an appropriately sized image, in a way that can be really sophisticated. The Image Optimize module can be much more aggressive, not only stripping metadata and unnecessary information out of your images but also compressing them harder. And Image Lazy Loader is a really easy way to use lazy loading on your site, so only the images within the user's viewport load initially, and all of the others load dynamically as the user scrolls down the page.

So let's talk a little more about responsive images. Here we can see demonstrated one of the more sophisticated capabilities, called art direction, where you can have different aspect ratios and cropping optimized for each type of client. But the thing to keep in mind is that this can get complex really fast.
So this might be a typical set of ranges we'd use in our CSS to optimize our layout. But keep in mind that even though a single media query can target all of these devices, for the sake of working with responsive images we actually have to pay attention to their physical pixel sizes, and then use the multipliers that map between the physical resolution and what presents as device-independent pixels. So it can get complex, and there's definitely a learning curve to working with responsive images. You may find it's easier to keep the aspect ratio the same across all of the different sizes to start, so you don't have as many different cases to worry about.

Let's quickly talk about the implementation. If you were in the theming workshop yesterday, you got hands-on with some of this. You basically define your breakpoints in the breakpoints.yml file within your theme, and then you set up a responsive image style. For all of those different breakpoints and multipliers, you can use different image styles, so you do have to set up the image styles themselves. Depending on how complex you go with this, you might end up with as few as three or four, or as many as 12 or 15, again depending on how complicated you choose to make it.

And this is what the output looks like in the HTML: you've got a picture element that contains a variety of source elements with different srcsets, with media queries to let the browser know when it should use which one. Then, for browsers that don't support the picture element, there's a fallback standard image.

So let's move on to the third-party assets. Again, these are really defined by that giant area in the chart and all of the different calls within it. And again, we want to try to use as few as possible.
Clients tend to like to use every tracking pixel under the sun, but hopefully, if they can understand the impact on performance, they can rationalize that a little. One thing that can help: if you're using a third-party asset, cache a local copy of it and serve it off your own server, so the browser doesn't have to make a connection to an external server to use it. Within Advanced CSS/JS Aggregation there's a Relocate submodule that can automate that and do some of the heavy lifting for you. And to the extent that you can use an aggregator like AddToAny, that will at least cut down the number of connections, in the sense of one single connection to retrieve all of your social sharing buttons instead of having to go out individually to Facebook, Twitter, LinkedIn, and so on. There are modules for a variety of those services.

So now we've figured out all of the changes we want to make. What should we consider as we get ready to go in and implement them? The main thing is to not do everything all at once. Some of the things you change will work out the way you thought and make your site faster. Some may kind of work, but not as much as you'd hoped, and may need tweaking. And other things may not work, or may even work to your detriment. The problem is that if you make all of those changes at once and deploy them, you won't really be able to tell which was which. So it's much better, if you can, to do small incremental rollouts, and test and measure in between, so you understand: did that have the impact we wanted? Should we leave it and build on it, or should we roll it back, refine our approach, and try again later? And ideally, you want to start with what you think will be the quick wins.
A good way to do that is this kind of impact matrix: take all of the changes you want to make and plot how much effort you think each will take to implement against how much impact you think it will have on performance. Then focus on that magic quadrant of things you expect to be high impact and low effort, and start there.

So finally, let's talk about what we should be paying attention to going forward. How can we prevent ourselves from ending up back in the same place, with the same performance issues? Certainly whatever dashboards or reporting you're doing to keep an ongoing picture of the site's overall health should include performance metrics: average load time and so on. You can leverage CI integration so that as you push code, say even to your development environment, it automatically runs performance tests and throws warnings if your code introduces some kind of performance issue. And there are modules like Monitoring or Performance Monitor that can help by giving you access to that performance data directly within your Drupal admin UI.

One thing that can be really powerful, especially in talking with your customer or your internal business owner, is the idea of a performance budget, which is kind of like a diet plan for your website. For each type of asset, you set a target for how much you're going to use per page, and then you can use that to track whether particular slow pages are adhering to the plan or are over budget. But I think the most powerful thing about the idea is that it underscores that if they're going to keep adding new things to the site, they need to either take away something equivalent or accept that they're going to degrade performance.
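The arithmetic behind a performance budget is simple: a load-time target on a given connection speed implies a total page weight. A minimal sketch, where the target, the connection speed, and the percentage split are all illustrative numbers:

```shell
# Page weight that fits a 3-second load on a ~1.6 Mbps connection:
# bytes = seconds * (megabits per second * 1,000,000 / 8)
TARGET_SECONDS=3
MBPS=1.6
BUDGET_KB=$(awk -v s="$TARGET_SECONDS" -v m="$MBPS" \
  'BEGIN { printf "%d", s * m * 1000000 / 8 / 1024 }')
echo "Total page budget: ~${BUDGET_KB} KB"
# That total then gets split across asset types, e.g. images 60%,
# JS 20%, CSS 10%, HTML and fonts the remainder.
```

This is exactly the calculation the budget tools below automate, but it's worth being able to do it on a napkin in a client meeting.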
So it really helps to instill that idea as we're having conversations about adding things to the site: are we going to give something up, or are we going to accept that this will have a negative impact on performance? The one limitation, at least of the tools we're going to talk about, is that they're really about data. As we discussed before, especially with third-party assets, sometimes it's really the connection bloat that's slowing down the site, and that won't get measured to the same extent.

But here's a great little browser-based tool called the Performance Budget Calculator. You say how fast you want the site to load and target a typical connection speed, and it tells you how much data that translates into as a page size. Then you can use these nice sliders to decide how much you want to allocate to the different types of assets. It's a quick, easy, visual way to do this; you could even do it in the middle of a meeting, as a team, to try to drive some consensus.

And this is a great browser extension called Browser Calories. Once you install it, on any page you visit you can open it up and it gives you this nice format that looks kind of like the nutritional information on the side of your cereal box. By default it compares the different categories against the top 100 websites, but if you have actually defined a performance budget, it will compare specifically against the targets you've set. It's a really nice way, as you go through pages you've identified as having performance issues, to get a quick look at some of that data.

So that's actually the content that I brought. I'll open it up now for questions or comments.

Question: How about optimization for single-page applications? I mean, I think the same basic rules apply.
I'd say the challenge there is probably more on the content side. If you've got a lot of content on a single page, obviously that's going to work against you to some degree. But typically, if we're talking about more text-type content, that doesn't tend to be as much of an issue performance-wise, and as long as you're lazy loading and so on, it's probably going to be okay.

(Audience comment, partially inaudible.) Yeah, for sure. Any other questions?

Question: You talked about database bloat. Have you had any experience with revisions, specifically with Workflows in Drupal 8.5, where these tables occupy like a third of my database or more?

Yeah, that can definitely be an issue. We ran into exactly that when we were doing scheduled nightly imports from a product information database. So certainly, if you can be selective and only enable content moderation on the specific content types that need it, that can definitely help. But yeah, it's definitely a thing to be aware of.

Question: I know there are modules that help with revisions. Have you had any experience writing plugins or something like that which purges things based on some criteria?

Yeah, we tried using some of those modules and never really found any that seemed to reliably do what we needed, so I ended up doing something more custom. But it's a common enough use case that I'm sure over time there'll be some kind of stable path for doing that. Sorry, you had a question?

I was just going to point out that those modules do exist (partially inaudible).

Right. Any other questions? Thanks, everyone. Excellent.