Did someone in the back close those doors? That would be much appreciated. Awesome. And I think I'm pretty loud at this point, but if anyone in the back can't hear me, wave your hands, do whatever you need to do. Yeah, so thanks for sticking it out through the end of the day. I know it's been a long day. This is kind of a late session, so I appreciate your attendance. And here we are for Everybody Loves Performance: Easy Audits and Low-Hanging Fruit. So I'm going to hit you right off with the same statistic that's in the session description, and that is one of the most well-cited web performance statistics out there: Amazon sees a 1% decrease in revenue for every 100 millisecond increase in their load time. So for every 100 milliseconds they're slower, that's 1% of revenue they're losing. What isn't always mentioned when people cite that statistic is that it's actually from 2008. So it's basically almost 10 years old. That said, not much has changed with regards to web performance in the last 10 years. BBC just last year cited that they lose 10% of users for every additional second of page load time. BBC is kind of a different organization; obviously, they're not selling products in the same way that Amazon does. But time and time again, organizations that are running these studies are finding very similar results. And so with all of these numbers, obviously this isn't always the case, but what we're starting to see is that 1 second is about 10% of your engagement. And quite often, your engagement is directly tied to your revenue. So 1 second slower on your page, and 10% of your revenue could go just like that. Google's even got some numbers that are a little harsher. They like to say that 53% of users abandon sites that take more than three seconds to load. So while you might be losing 10% every second, once you hit that three second mark, you might lose as much as half your user base.
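To make those rules of thumb concrete, here's a back-of-envelope sketch. The function names and the linear model are mine and purely illustrative; real traffic never behaves this cleanly, and these figures come from two specific studies, not universal constants.

```javascript
// Rough impact estimators based on the figures above:
// ~1% of revenue per extra 100 ms (Amazon, 2008) and
// ~10% of users per extra second (BBC, 2016).
function estimatedRevenueLossPercent(extraMs) {
  // 1% per 100 ms of added load time, capped at 100%
  return Math.min(100, extraMs / 100);
}

function estimatedUsersLostPercent(extraSeconds) {
  // 10% of users per added second, capped at 100%
  return Math.min(100, extraSeconds * 10);
}

console.log(estimatedRevenueLossPercent(500)); // → 5 (% of revenue for +500 ms)
console.log(estimatedUsersLostPercent(3));     // → 30 (% of users for +3 s)
```

Even as a crude model, this is handy for framing feature discussions in terms stakeholders care about.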
And of course, they're gonna continue dropping off from there the longer it takes. And for that reason, Google has actually stated that their page rank algorithm is gonna start punishing really slow websites starting in July of 2018. So we're only a few months away from that. So at this point, we've kind of got to speed up or we're gonna lose the race. And really the reason that it's so important, and the reason it's in this track, the user experience track, not necessarily a track that performance might normally fall under, is because performance is user experience. Google is punishing slow sites because their users are gonna get a worse experience. They're in the business of giving people a good experience. We all are, hopefully. And so that is why performance is so important. So with that, we're gonna talk today about what was promised in the title and the description, two things: easy performance audits, and then, once we've done those audits, how we're going to improve our performance. So I'm Gus Childs. I'm a technical project manager and front end developer. guschilds is my handle basically everywhere: Drupal.org, Twitter, GitHub. So if you end up with any questions at the conference or later and don't get a chance to track me down or ask them, you can find me there. And I work for Chromatic. Chromatic is a fully distributed design, development, DevOps, and support agency. You can find us on the web at chromatichq.com. I've got the Twitter handle in the bottom right of every slide, so feel free to reach out to us at the same time with any questions. So yeah, let's get right into it: conducting easy performance audits. I like to break down my audits into five different pieces, and we're gonna go through each one of these individually in greater detail. Those pieces are metrics, tools, benchmarks, goals, and results. So first up, of course, out of those are the metrics.
What should we even be measuring when we're conducting our performance audits? And again, I know I'm going kind of fast; the slides will be available online, and I think they're pretty helpful on their own. So I apologize if I go a little faster than notes can be taken. So again, the first part of our audit is our metrics. What are we gonna measure? We wanna measure these five, or at least these are the five I like to measure if I'm paring it down: our speed index, our time to first byte, our start render, our load time, and then our fully loaded. And once again, let's drill down into each of these one by one. So first up would be speed index, and this image right here kind of explains it best. That's what's called a filmstrip. What that does is it's loading the site, the DrupalCon Nashville site, on a slow connection, really only about five megabits per second. So maybe equivalent to what we're experiencing here at the conference, but it's a real world example of testing on a slow connection. This is showing what the loaded site looks like after every half second. So you can see that users are seeing nothing for a couple seconds, and then they start seeing something and they're thinking, okay, I'm actually getting somewhere. This is loading. And then after three and a half seconds, most of their mobile viewport is filled with the content that they're gonna get. And so that's what the speed index measures: that three and a half second mark, when is their viewport fully visible? And this kind of goes along with a term that's come out, I think in the last few years, called perceived performance. And perceived performance is important because users don't care what your actual load times are. The metrics we're gonna end up talking about in just a bit, they don't care about those. They only care about having a good experience.
And so for that reason, I at least like to focus on speed index when I'm doing my audits. That's kind of the one number I look at the most. That said, all of the older metrics still matter, all the rest of the four. So going through those: time to first byte is a measurement between when you hit enter on your address bar or tap go on your phone and when you get the first byte back from the server. Then you've got start render. So once you're getting bytes back, when are there actually pixels starting to get painted in the browser? The first non-white content being painted in the browser. Then you've got load time, and that's when the browser actually fires its window load event to say, hey, this page and the content that's on it, that's loaded, so anything that's waiting for that can act; otherwise the user should be good to go. But then there's fully loaded. And so that starts measuring after the load time and it figures out when there's been two seconds of no network activity, including everything triggered by JavaScript. And that's kind of the key phrase here, including everything triggered by JavaScript, because as we know, most sites have plenty of JavaScript being triggered. I'm not necessarily talking about React and those kinds of things, but third-party requests. And so that's normally what ends up showing up in fully loaded. We'll talk more about that as well. So just a quick recap, there's the metrics: our speed index, and then our four more traditional, older metrics. So now that we know what we wanna measure, we gotta figure out how we're gonna measure it. What tools should we use to grab those numbers? The three tools I'm gonna talk about today are WebPageTest, PageSpeed Insights, and Lighthouse. They're all free and fairly easy to use, available online.
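As a side note, a couple of those traditional metrics can be read straight out of the browser's Navigation Timing data. A rough sketch follows; the helper function is mine, and in a real page you'd pass in `window.performance.timing` rather than the sample object below. Start render and speed index aren't in Navigation Timing at all; those need other tools, like the Paint Timing API or the filmstrip analysis we just saw.

```javascript
// Sketch: deriving two traditional metrics from Navigation Timing
// fields. Field names follow the (legacy) performance.timing API.
function deriveMetrics(t) {
  return {
    // time to first byte: navigation start until first response byte
    timeToFirstByte: t.responseStart - t.navigationStart,
    // load time: navigation start until the window load event finishes
    loadTime: t.loadEventEnd - t.navigationStart,
  };
}

// Sample timestamps (ms since epoch), purely illustrative:
const sample = {
  navigationStart: 1000,
  responseStart: 1300,
  loadEventEnd: 5200,
};

console.log(deriveMetrics(sample));
// → { timeToFirstByte: 300, loadTime: 4200 }
```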
So before I get into them though, I'd like to say: be aware of hardware and network specs for consistent results. Because if you're testing a site, maybe you're just looking at DevTools or testing a site on your local machine and saying, oh, that loads pretty fast on my machine. We all know how that goes; the "works on my machine" saying is so dangerous. If you're just using your fast network or your fast computer and you're not throttling them, you're gonna get inconsistent results. The stakeholders I'm telling that their site is fast can't run the same test; they might experience different results on their hardware, and obviously our users are gonna have a different experience on theirs. So we want all of the hardware and network specs to be consistent regardless of who's running the tests, when, where, why. So first up, we've got WebPageTest. You can find that at webpagetest.org. It is free and it does have consistent hardware and network, because it's going to be running on their own remote servers every time, and those servers and their specs are consistent. So this is what it looks like when you go there, and this is one of the oldest tried and true sites for all this, but it's super helpful. Fairly straightforward form. There's a lot you can do with it in the advanced settings; you don't even have to really get into all that. All you have to do to get started is plug in your URL and hit start test. You can do interesting things like change what server it's testing on, the browser, the connection, which by default is that five megabits per second, so fairly slow, 3G-ish almost. But that's super useful, so I leave that and then hit start test. It does its thing, it runs the tests on that server, and then it comes back with the results. And the most helpful part of those results for me is what's under the Performance Results header, right there in the middle.
So we can see right there the metrics we were just talking about. We've got speed index, first byte, start render, your document complete load time, and then your fully loaded time. And also you can see in the top right it gives you some grades for various aspects, how it thinks you're doing on those pieces. So it works by running what's called a synthetic test (it's not a real user; they're running your test on their servers), and it runs three of those and presents you, this is kind of underneath what we were just seeing, with the results from those three tests. You can click into the waterfall, you can go super in depth; there's a lot you can dig into. You can click those grades I was showing in the top right, and if it says you're not doing that great a job at leveraging caching of static assets or compressing your images, it'll tell you specifically which files it thinks you could be doing a better job on. So, a little bit in summary, WebPageTest is good for obtaining those official metrics, the measurements we were just talking about, the ones that you're probably gonna be sharing with your stakeholders, or maybe you're the stakeholder and want to measure those yourself. WebPageTest is really good for getting those metrics, but also you can find your potential improvements, you can dig into the waterfalls and things like that. And then also, it's a little bit more advanced, but if you're a developer or have developers at hand, you can actually automate repeated tests on what's called a private instance, and that can be super powerful and helpful as well. So next up we've got PageSpeed Insights. It's a longer URL, but it's the first result on Google when you Google it, and it is also free and also has consistent hardware and network. And what it does, it once again runs a synthetic test on your site, and it does it both in mobile and in desktop.
And so it'll give you the information for those separately. So right now we're on our mobile tab, we've got our desktop tab, and what's really helpful with PageSpeed Insights is what's here, what I've zoomed in on. What it does is it'll give you a one to 100 score on how well it thinks you're doing with optimizing the performance of your page. So on mobile it says 62, so we're doing pretty well, but there's a decent amount of room for improvement. I think desktop was at 47, so even more room for improvement there, it thinks. And kind of like WebPageTest, you can scroll down and it'll start to tell you exactly what it dinged you on. You can click into those and see lists of specific files and things you can improve. So yeah, again, super helpful. I like to use PageSpeed Insights to get that one to 100 score. You know, maybe as a stakeholder or for your stakeholders, you don't want to digest and learn about all the different metrics and exactly what they mean and worry about them, or even have an understanding of what numbers are good. Whereas here you can say, oh, how well does Google think I'm doing, one to 100? If we're at 62, or whatever it was at, and we improve to 80, we know Google thinks our site is faster. If that's influencing search results, that's probably gonna be a good thing for us. But it's also for finding those potential improvements again, because you can click into those lists that tell you where the remaining 38% is. And then next up would be Lighthouse. So Lighthouse, if you have Google Chrome, you've already got Lighthouse. It is in the DevTools of your browser, but then it is also open source, so you can grab it on GitHub and use it in different ways as well. Also free, but it's gonna be inconsistent hardware and network. You can throttle the network; you can say, I only want it to run this fast, kind of a simulated, slowed connection. But you can't really do the same thing with hardware.
You can say, I want a two times or four times slowdown of my processor, but you can't necessarily throttle it so that if I ran it on my machine and everyone in here ran it on theirs, they'd get the same exact numbers. So we use it for kind of different reasons in that regard. This is what it looks like if you do open up DevTools in the bottom of your browser. There's a small blue button; it says "Perform an audit." You click it, it does its thing. Once again, it's simulating a page loading on mobile. I guess as kind of a side note, the reason all of these focus more on mobile is because if you catch all of the problems that mobile is having, you're probably gonna solve for the other screen sizes as well. So you kind of want to default to mobile. I actually think there's a session on that tomorrow that looks really good; I would recommend it. I think it's called All Performance Is Mobile Performance, but I could be wrong. So it does its thing, and then it actually also gives you one to 100 scores not only on performance, but on other stuff like accessibility, best practices, SEO. Again, I don't use these as my one to 100 scores because of the inconsistency, but it is helpful. You scroll down, you get more filmstrips, more of that speed index, perceived performance thing. And you also start to see some of the metrics that go into speed index. They've actually got some beta stuff in there for newer perceived performance metrics, First Interactive and things like that. And then again, scroll down: it tells you why it dinged you on your scores, lists of files that you can improve on, all that kind of stuff. If you want to take it one step further, if you do want to introduce that consistency or use this in more powerful ways, you can clone it on GitHub.
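If you do go the GitHub route and run Lighthouse from the command line (`lighthouse <url> --output=json`), one handy trick is pulling the one to 100 scores out of that JSON report yourself. A sketch, assuming the report shape of recent Lighthouse versions, where each category score is a 0 to 1 float; the sample report here is trimmed way down and illustrative only:

```javascript
// Sketch: convert Lighthouse's 0–1 category scores into the familiar
// 0–100 numbers shown in the DevTools UI.
function categoryScores(report) {
  const out = {};
  for (const [name, cat] of Object.entries(report.categories)) {
    out[name] = Math.round(cat.score * 100);
  }
  return out;
}

// Heavily trimmed sample of a Lighthouse JSON report:
const sampleReport = {
  categories: {
    performance: { score: 0.62 },
    accessibility: { score: 0.9 },
  },
};

console.log(categoryScores(sampleReport));
// → { performance: 62, accessibility: 90 }
```

From there it's a short step to the bulk runs and automated tracking mentioned in a moment.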
There's a ton of really interesting and exciting things you can do with it that kind of warrant their own session. So I'm not going to go too deep into that, but it's definitely worth checking out if you're comfortable with Node and that kind of thing and are interested in this. So yeah, Lighthouse: use it to find those potential improvements. We don't focus a ton on the score and the metrics from there; we're really just trying to dig into the nitty gritty. And then again, if you're a little more on the advanced side with development, you could automate repeated tests, you could run bulk tests, you can get JSON back and do all sorts of things with it. And so what's kind of funny to me is that all three of those tools are operated by, if not originally written by, Google. So it's kind of silly to me that I end up using them all. Like, why do I need three tools made by Google? Why don't they all tell me the same thing? But they don't. They say slightly different things; they're useful in different ways. So I like to use all of them, just to make sure I'm getting the bigger picture of everything. One tool I will say I did not list originally, but I will talk about, is SpeedCurve, at speedcurve.com. It is not free, but it can run either synthetic (again, simulated) tests or real user monitoring. So it'll literally sit on your page and pay attention to all the network requests and everything that your real users are getting. And then it'll spit it out into this really clean and pretty and fun to look at UI. It'll map your performance over time, across devices. You can plug in your competition and keep your eye on your competition. It's got all sorts of bar charts and everything, and also waterfalls. So a ton of cool stuff in there. Again, it's not free, so I'm not gonna go into a ton of detail.
But if you're looking to continuously monitor your site across devices, over time, against the competition, and you want a really polished UI to see all that stuff in, I would definitely recommend looking into SpeedCurve. So those are our tools that we're gonna use to measure our metrics and figure out potential improvements. Again: WebPageTest, PageSpeed Insights, Lighthouse, and SpeedCurve. So then, okay, we know what we're gonna measure. We know how we're gonna measure it. But how do we even know what numbers are good once we start running our site through and we're getting, say, three seconds for the speed index? Is that even good? Could we do better? So we've gotta establish some benchmarks here. And the way I like to do that is to identify the competition, figure out the important pages I want to pay attention to, and then measure those pages on my competition. So when I say competition, this is what I like to use. We've got our current production site. So it could be your current production site that you're trying to improve, or maybe you're going through a redesign and a rebuild and you wanna be totally sure that you're not gonna launch a site that's slower than what you've already got. And then you also wanna make sure that you're gonna do better than your competition, right? So I like to gather three direct competitors that are in a very similar space to what you do, and then three indirect competitors that are absolutely killing it with regards to performance. So as an example there, one of our recent clients sold sunglasses, and they were going through kind of a major software upgrade, a redesign, all this work. And again, we wanted to make sure their site was gonna end up fast, not only faster but way faster, because it was pretty slow at the time that they brought us on. So we looked at their site.
We had them identify three brands that they're always keeping their eye on, comparing themselves to, competing with for actual shelf space in retail stores. We got those brands from them. And then we measured about 30 different brands in a relatively similar space, and we found out which were the most performant. We ranked all the speed indexes and figured out which sites we were gonna aspire to match or even beat. So in our case, this ended up being a watch manufacturer, an outdoor gear retailer, and a handbag and fashion accessory retailer. So these seven properties are what we were gonna measure to figure out where we needed to be. And then we're gonna identify our important page types. We can't measure, I mean we could, but it's not really worth it to measure every single page on the site. So which ones are we gonna focus on? We can't really just measure the homepage if other pages are critically important to the user experience. So again, e-commerce, right? We're selling sunglasses. So these are the pages that are important to us: we've got our homepage; the product listing page, which lists a bunch of products; the shop all page, which was really important to them. It's kind of their PLP, but it lists all of the products they sell, and apparently a lot of people use that page. Then the product detail page, so the individual product with the most important part of the site, the add to cart button. And then a collection page, which was just kind of a content-heavy page speaking to a specific collection of sunglasses that they had. So then we started to measure these manually, at first at least. We went to WebPageTest and started plugging them in one by one. And we got the results, and these are still DrupalCon's results, but we got the results, put them in this hard to see or understand or digest chart, and started looking at those results to boil them down into an easier graph here.
We're looking at speed index, and I've taken it from the five pages to three. So we're looking at our homepage, PLP, and PDP. The blue bar is where we're at right now; before the rebuild, launch, and everything, that's what their site was already at. And then the green is the average of their direct competition. So you can see they're actually already better than their direct competition. And if they hadn't looked at the next one, the yellow one, the indirect competition, they would have been totally satisfied and not really put much effort into improving where they were currently at. But you can see by the indirect competition, there's still a ton of room for improvement. And so again, speed index: the line going across is the average, at about 4,400. So on a slower connection, all of these pages are taking about four and a half seconds before they're fully visible in the viewport. Time to first byte, similar situation. Sometimes we're already faster than the competition, sometimes we're not. In general, yellow is fairly low, and we're averaging between three and four tenths of a second just to get that first byte back. Start render, we're hovering closer to two seconds. Again, the specific numbers here aren't super important, because you'll start to understand what numbers make sense for you; it's the general patterns you're seeing among these. So yeah, start render about two. We know we've got room for improvement. And then same goes for load time, which again on a slower connection is upwards of nine seconds. You can see the PDP was taking maybe 11 seconds to load on a slower connection on their current site. And then fully loaded: the JavaScript was doing its thing, and it might be 15 seconds or more before all of that's out of the way and the user can just do whatever they need to do without being obstructed. So we've got numbers now; we kind of understand what are good numbers, where do we wanna be?
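One way to turn benchmark numbers like these into a concrete target is to take the best value you measured and shave a margin off it. A sketch; the function and the sample numbers are mine and illustrative, and the 20% margin follows a common rule of thumb about beating the competition:

```javascript
// Sketch: derive a goal from measured benchmark values by beating
// the best (lowest) competitor number by a given margin.
function goalFromBenchmarks(speedIndexes, margin = 0.2) {
  const best = Math.min(...speedIndexes);
  return Math.round(best * (1 - margin));
}

// Speed indexes (ms) measured across competitor pages, illustrative:
const measured = [4400, 3100, 1600, 5200];
console.log(goalFromBenchmarks(measured)); // → 1280
```

The same helper works for any of the metrics: feed it time to first byte or start render numbers instead.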
And so now it's just a process to pin it down to very specific goals. How fast is fast enough? You can keep tweaking performance forever, but when are we at least gonna be happy with our efforts and be proud of what we're launching and feel like the client is going to have success with this improvement? So the general rule of thumb that a lot of people like to cite is to beat the competition by 20%. Again, we might've already been beating our direct competition by 20%. So while I keep that in mind, I would just say: look at the numbers you've got and just be ambitious. It's not gonna be the end of the world if you don't hit every single goal, so you might as well shoot for the moon, whatever that saying is. So we took all the numbers that we were seeing before and we ran some stats, just to figure out: what's the best speed index we were seeing on the homepage? What's the best time to first byte across all the sites? And then how can we create our goals from there? So more charts, same chart, but we've added the orange, and that's our goals. And this time the line going across is actually the lowest value that we saw across all of the pages for this metric. So our minimum speed index here was about 1,600 milliseconds, and our goals were about 1,500 milliseconds on the homepage and then like 2,250, I think, on the other pages. And you can see here, we're being ambitious, right? Our indirect competition that we thought was killing it, we're saying, well, we'd love to be even better than that. Then we'd really feel good about what we had going on. Same thing goes for time to first byte. We said, okay, 0.2 might be better than what most of these sites are getting on average, but that's where we'd like to be. About one second for start render instead of two, two and a half seconds. Again, better than most of the other sites we measured.
And then load time: we want the browser to think it's loaded within around three, four seconds. And then fully loaded: we want the JavaScript to be out of the user's way within about five or six seconds. You can see this is pretty ambitious compared to the other results, maybe a tiny bit too ambitious; that came back to bite us a little bit. But again, there's no harm in creating these ambitious goals in the first place. So we've got our goals, we know what we're measuring and all of that. So now it's just a case of figuring out, okay, how are we doing with those goals? Of course, if you're intensely focusing on performance, you're probably running tests all the time. But what we like to do is run an official round of tests every week and then communicate those results with the rest of the team. So again, this was totally manual for quite a while, and we would just dump the numbers into yet another spreadsheet. The colors are very faint, it looks like, at least from up here. But what we would do is highlight cells where we were actually slower than our current production site; we'd highlight those in red to say, we can't launch with this. Yellow was better than what we already had, but still not reaching our goals. And green was where we met or exceeded our goals. And you can see it starts off fairly red, a little yellow, and as it goes more to the right, the red starts to go away: more yellow and then more green. And what's really helpful about this, especially as you're in the later phase of rebuilding a site or redesigning it and implementing all that stuff, is when stakeholders come to you and they say, okay, we want a modal window that pops up immediately when the user visits the page and asks them to sign up for our newsletter. We want a big image carousel at the top of the page.
Instead of one image, it's gonna load five or six, and there's gonna be a video, and it's gonna autoplay, and it's gonna have a ton of JavaScript powering it. Or we need every third party script under the sun for marketing and analytics and user tracking and all that. When these kinds of requests come through, I mean, they're all fair requests, but what you can do is judge them by the impact that they're gonna have on performance. So you could say, okay, are they worth that impact in revenue? I mean, if you go as far as building that feature and it adds two seconds to your speed index, that could mean 20% of your revenue. Is that feature worth it? Is it really gonna increase engagement as much as you think it is, enough to overcome that? Not so sure. So with that mindset, this is, again, hard to see, I know, but this is the latter half of what you were seeing on the previous slide. And so again, less red; more green, if not yellow. So it was those kinds of decisions that got us here, both those kinds of decisions and also, of course, focusing on what specifically we can do to improve performance, which is the second part of this talk. But where did we end up by the time we launched? So again, same chart, but this time we've added red, and that is where we were at at launch. So you can see with our speed index, we set pretty ambitious goals, but we exceeded them. Our site was loading in the viewport much faster than any of the competition that we measured, and two, three, four times faster than the previous site was. Time to first byte: we didn't necessarily hit all the goals, but we hovered around 0.2, 0.3 seconds, which in the grand scheme of things is pretty good. That's totally fine for time to first byte. Start render, similar. We're hovering right around our goals, right around one second. So instead of, on that PLP, almost three seconds before a user even sees a pixel, now we're down to one second.
Load time, we're starting to creep up there, a little further from our goals. Not super excited to see that, but still kind of hanging in there with the indirect competition. And then it's even more extreme with the fully loaded time, which we will get into in a bit: a bit of an improvement, but not nearly as much as we would like. So mission accomplished, at least for the most part. Many of our goals were met or we got really close. A drastic improvement regardless, and we're ready to launch. But how did we get there? And that's the second part of the talk, which I promise will go quite a bit faster than the first; I know it's nearing the end of the day here. So when it comes to improving performance, I just want to start with a bit of a warning. This is yet another often cited quote in the world of computer programming and web development: premature optimization is the root of all evil. So you can't necessarily just expect to write fast code as you're writing that code. You can't micro-optimize your for loops when there's so much more going on with the rendering of your page; I mean, you can optimize them, but that's not going to be what makes the difference. What you need to do instead, instead of just saying, oh, I'm gonna build it and kind of pay attention and hopefully get it fast as we build it, is dedicate pre-launch time where you can focus on improving performance. So maybe that's a sprint, maybe that's two sprints. A couple sprints before you launch, you say, okay, these two weeks, this chunk of time, this is for performance. And again, kind of like with the premature optimization thing, your custom code is a very small piece of the performance pie. There's so much more going on, so you've got to look at the bigger picture.
You've got to look at everything involved with the requests that the user's interacting with in their browser, and that's what we're gonna do. You've got to determine what each of those metrics we talked about is telling you. Why is it higher than our competition's? If a specific metric is the one that's higher than our competition's, that probably means there's something up there. So to go through each one of those metrics, starting with time to first byte: the reason I said 0.2, 0.3 is actually fairly good at this point is because it's been thought for quite a while that Google's probably already been punishing time to first bytes that are higher than 0.4 seconds. So if I don't even get a byte back for about half a second, Google's not gonna like that, and it's probably gonna ding you. And also, I think it was on a Lullabot podcast with some Pantheon folks, they said: think of time to first byte as your race's starting line. When the user gets their first byte back, that's when the race to load the rest of the page begins. So why would you let this number, this time to first byte, get out of hand if it's only gonna keep you at the starting line until that byte comes through? So how can we lower it? How can we win that race? The first thing to look at and think about, of course, is: is my website cached, and is that properly configured? There's a ton that goes into caching. There's a lot to think about, a lot of configuration. But one thing to point out and highlight, assuming a lot of us are working with Drupal: Drupal caching can be enabled, but it can be broken. The caching for a specific page could be enabled, but something on that page is actually breaking it and you're having to re-render it every time. Or maybe the configuration isn't where you thought it was, and a page that should be cached isn't, or vice versa. So there's a lot to look at with whether your website's being cached and how that's configured.
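One quick, outside-in way to sanity-check that is to look at the response headers a page comes back with. A sketch; the header names here are common conventions (Drupal 8 and up sends `X-Drupal-Cache`, Varnish and many CDNs send `X-Cache`, and shared caches send `Age`), but exactly which ones you'll see depends on your stack, so treat this as illustrative:

```javascript
// Sketch: does this response look like it was served from cache?
function looksCached(headers) {
  // Normalize header names to lowercase for comparison.
  const h = {};
  for (const [k, v] of Object.entries(headers)) h[k.toLowerCase()] = String(v);
  if (h['x-drupal-cache'] === 'HIT') return true;
  if ((h['x-cache'] || '').startsWith('HIT')) return true;
  // A nonzero Age header means a shared cache served a stored copy.
  return Number(h['age'] || 0) > 0;
}

console.log(looksCached({ 'X-Drupal-Cache': 'HIT' }));      // → true
console.log(looksCached({ 'X-Cache': 'HIT from varnish' })); // → true
console.log(looksCached({ Age: '0' }));                      // → false
```

Spot-checking a few important pages this way catches the "caching is enabled but broken" situation before your users do.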
And then: do you need to upgrade or tune the server's hardware or software? These are things like disk space and RAM. Is there enough of that on the box that's serving your website? Have you gotten away from shared hosting, hopefully? Are you using PHP 7 instead of PHP 5? That alone is quite an important performance improvement. Have you tuned MySQL? All things like that. Am I using a CDN? Should I be using a CDN? That's gonna lower the round-trip distance behind that time to first byte. Every request is gonna have to travel through less pipe, if you will, if the internet was actually powered by pipes. There would be a shorter distance there if you're using a CDN, and that would reduce the time to first byte. Should I have things like Varnish or Redis in front of my site, for situations where the CDN gets bypassed or the cache has expired, things like that? And then: could I reduce the number of redirects? Time to first byte is measured after all of the redirects. So if, for whatever reason, a node's alias has been updated four times and when a user goes to visit it they redirect four times, each one of those adds to that time to first byte. We wanna reduce redirects as much as possible. One, hopefully; less than one would be better. That would be zero. So then our start render time. We're getting bytes back; how can we get pixels painted on the page as fast as possible? Maybe they're not important pixels, but if there's something, our user's gonna think, okay, this is loading; if I wait a little bit longer, more pixels are gonna appear. There's a lot to this slide, and I'm not gonna go into all of it. Again, these slides will be available online, or you can always reach out. There's a ton you can do in terms of requests. You can have fewer requests. You can move render-blocking ones that are in the head but don't need to be there, like certain JavaScript, down to the bottom.
That's been a common practice for so long. You could defer them instead. You could tell the browser to fetch them before it normally would, to kind of have them ready. You could get a service worker going if you're in the progressive web app game, which is definitely something to look into; again, a whole other presentation. But there's a ton you can do in looking at your requests and that waterfall, thinking about how you can handle them better so that your content isn't blocked from rendering. Because with CSS and JavaScript, by default the browser's not gonna paint anything until those are fully downloaded; it doesn't know whether the CSS is gonna impact this piece or that piece up top, and we don't want a flash of unstyled content. So having as little as possible be render blocking is the way to go. And then load time: how can we get a usable page as soon as possible? A big one here is enabling HTTP/2. It requires HTTPS, but if you're able to pull it off and flip that switch, it significantly reduces the impact of individual requests, because it doesn't require a whole other handshake and all the technical stuff that goes on with typical requests; it kind of opens up a pipe and starts streaming those. Again, a whole other session, but there are huge gains to be had there, so that's definitely something to look into. And then there's the common stuff we might all be familiar with. Or at least, I guess I shouldn't say we all might, but as a front-end developer you're always thinking: have I optimized my CSS and JavaScript? Is it aggregated? Is it minified? Is it properly compressed? Is it being cached? Same thing with images: are those being compressed? Are they served in the best formats they could be? We now have formats like WebP that are way faster, with smaller file sizes, for browsers like Chrome. Are they sized? Are we using image styles to deliver as small an image as possible to our users?
Are they properly cached? Should we be lazy loading the ones that show up below the fold? There's a ton we could do with images. Images alone should probably be your starting point when you're looking at your start render and fully loaded times and stuff like that. The same thing goes for fonts. Fonts warrant their own slide because we don't often think about them. Or again, at least as a front-end developer, you don't. You say, oh, I'm getting this font from Google; that's super helpful, I don't even have to put it in my project. But actually, a lot of those Google requests end up being super slow. So should we deliver those a different way? Can we have fewer characters in our font, because we're not actually using all the characters? That would reduce the file size, as would compressing it. Are those files even being cached, however we're delivering them? And then fully loaded time, the last one. How can we get out of the user's way as soon as possible? Again, this is where we got burned on our site, just because there were so many third-party scripts. I think I talked about modal windows and image carousels; we won that battle, but we didn't win the third-party script battle, unfortunately, and it showed up in the results. But this is something we showed to the client, and we said, you know, you can have all the analytics and the tracking and everything, but this is what it's gonna do to the page. So at least they're aware. When it comes down to fully loaded, the first culprit you're gonna wanna look at is: are my third-party scripts under control? When going through each metric one by one, I didn't necessarily go into speed index, and that's because fixing the other metrics is largely going to fix your speed index. You don't really look at the speed index and think, okay, how am I gonna get that down by 100 milliseconds?
It doesn't really give you much insight into that. What does give you the insight are the traditional metrics; you fix those, and then that's when your speed index starts to come down. That said, as the slide says, there are some techniques that can shave time specifically off speed index, like inlining some critical CSS so it's not render blocking at the top of the page, and lazy loading the images below the fold so the ones above load much faster. There are things you can do there, but those are things you think about when you're thinking about those other metrics as well.

So, mission accomplished. What did we learn? We learned that performance is super important: one second could cause 10% of users to drop off, and in most cases that would be very detrimental to our business. We learned about audits: all the metrics that we like to measure, how we're gonna measure those metrics, getting our benchmarks so we even know what numbers are good numbers, creating ambitious goals from those numbers, and then tracking results weekly so that we can follow the progress on the improvements we're attempting to make, and also so those numbers can inform the decisions of where we want to focus our time next. And with that time, we're dedicating pre-launch time to focusing on improving performance. We're figuring out what each metric is telling us. We're identifying ways to improve those specific metrics. Only you can improve performance. Go forth and conquer. Thank you for your time. Any questions?

Yeah, I'll upload the slides to the node on the DrupalCon website, and they should be available there. And the question was: can I provide a link to the slides?

Hey, Gus. Let's say I'm somebody who doesn't know how to deal with all this. I don't have a lot of budget. I'm running a Drupal site serving a lot of anonymous users. What are, like, the first two things I should do to speed up performance?
CDN, PHP 7, what are the quick hits that people in this audience who are in that boat might say, oh, I can go back to my team tomorrow and we can do these things?

Sure. First, I would recommend running WebPageTest, because the things you listed, PHP 7, a CDN, those are all definitely gonna help, but maybe that's not why your page is hanging. So I would first look at WebPageTest, see which metric is so big, and then do a little research on what you can do to improve that. And sure, for many sites PHP 7 might be a huge win, but I think you always kinda wanna have those metrics to inform that. I know that can be difficult when you're not super tech-savvy, but finding the happy medium there is what I'd recommend.

So the first two tools that you showed looked really interesting. We run a kind of B2B site that's nearly 100% behind a login, which is a custom SSO integration. So what kind of options do we have for those free hosted tools to get in?

Yeah, that's tough. We had a situation where a site was behind htpasswd authentication, like .htaccess, and you can plug that into WebPageTest. I don't know if, with the free web-based version, you could give it some fake credentials and get it to log in. I think what you might have to start doing is running your own private instance of WebPageTest, and there's an API for that; even Lighthouse has a ton of CLI options. I don't know specifically, because I haven't had to do that, but I would not be surprised if there's a way to say, okay, here's where the test has to log in, here are some dummy credentials that'll get it logged in, and from there we want it to go to a certain page and measure it. I mean, with Lighthouse you can, because it's in your DevTools. But again, that's inconsistent. So yeah, with WebPageTest you can do your own private instance; you can host your own copy.
And then Lighthouse you could just clone from GitHub, and it's fairly straightforward to use. Well, not straightforward, but it's easy to go from there.

Do you have any recommendations or insights into cache warming, specifically for sites that don't get hit a lot? If somebody's always a first-time user, they're always getting the uncached version. Even if the cached version loads really fast, if the uncached version is sort of the de facto user experience, how do you solve that?

Yeah, definitely. I will say, on the project I was giving as an example, I don't think we thought enough about cache warming. It's something I've experienced in the past. I think there are Drupal modules that attempt to do it, but it's been a while since I've looked. So you kind of answered it to the extent of my knowledge. I'd say, yeah, look into cache warming, but unfortunately I don't have specific recommendations for that.

You plugged one presentation that's tomorrow; I was also going to plug another one on Thursday. How to get 100 out of 100 on the PageSpeed test.

Okay. Is that yours?

That's mine, yeah. So if you want to know exactly how to accomplish what he's talking about, I go through it in detail. I have a demo that takes about two minutes; it goes from a 54 up to a 99.

Yeah, and that's Google's tool, and they're the ones that are going to start dinging. Awesome, appreciate it.

So, you showed your weekly page audits, and you said you did that manually for a long time and you're doing something different now. I'm in central IT for a university, and we turn sites over to content authors who then, like, upload a gig of TIFFs in a carousel, and then performance tanks. Do you know of a tool that I could use to kind of passively monitor?

Well, there's SpeedCurve. The last one I showed, I think, would be useful if you're willing to fork over a little bit of cash.
Otherwise, with WebPageTest or Lighthouse, you'd probably have to spin up a server and get those on there. I don't know the structure of your team and all of that, but you'd have to do that if you're trying to go the free route. You could use the WebPageTest API or Lighthouse to just generate numbers, and then the tracking and what you do with those is up to you. That's why SpeedCurve is so helpful: it does the tracking for you, it gives you the numbers. I think it's like 20 bucks a month, and you can do quite a bit with SpeedCurve. So that's where I'd suggest looking first.

Thanks for the excellent session. So, there are two ways of looking at page speed. One is the traditional way, where you analyze the waterfall of the resources. And the second is the visual metrics, where you actually render it in a virtual browser and see the diff, right? Now, with applications like React and Angular coming up, the JS is getting monstrous on the front end. And all the third-party JS, they're recommended to be loaded in an asynchronous way, right? So they do not affect the rendering for the end user. So what do you prefer? What is your personal choice going forward? Do you prefer the visual metrics, or do you go for the traditional metrics?

I prefer both. You could go for the visual metrics, but those scripts that are loading in the background, while they're not affecting the visual page, they're gonna slow down your experience. You're trying to scroll, and there are a thousand things trying to track what you're doing, executing or loading themselves. So that's why I like the traditional metrics, like fully loaded, especially for those kinds of things.
But then speed index I still consider super important, because aside from, like, janky scrolling or whatever's happening in the traditional metrics, the speed index reflects when a user actually thinks their page is loaded. So I like both.

Is there anything that changes when you're working with a single-page application? I mean, with the tools and the measures that you recommended?

Yeah. I don't have experience with that. I would assume in many cases with a single-page app you're still gonna have URLs that are changing, in which case you could still hit them. If not, I think it'd be a little trickier. That might be when you start to really get into using Lighthouse, pulling that down. It's powerful, and it's JavaScript-based, so you could probably have it do things like execute certain interactions, or at least use something else to execute certain interactions, get you to a certain point, and measure the performance of that kind of stuff with Lighthouse.

For the webpage testing and page speed, especially when you're working with a client or on a project, do you run multiple tests each time? Because we've noticed occasionally we'll run two or three in a row, and the numbers can be wildly different.

Yeah, and that's tougher, to the point that was made earlier, when the cache isn't getting warmed. Especially as you're developing a site and you don't have a ton of users hitting it, your first results could be uncached, and then you're gonna get way higher numbers. You should worry about that too, of course, but you wanna think: okay, in an ideal world, when we've got the warming and this and that, what are our metrics? I've found that the closer you get to launch and the more people are actually using the site, the more the numbers kinda stabilize.
And the fact that WebPageTest runs three tests and kinda averages them helps as well. I think if you were continuing to get a lot of erratic results, I would start to be curious, and maybe that's not necessarily WebPageTest's fault; that's something you gotta dig into, to figure out which metrics are showing wildly different results and why. Maybe in a certain situation some function is getting hung, you know what I mean? And really start to dig in there.

If using a CDN is not an option, do you have any suggestions on other low-hanging-fruit methods to improve image performance?

Yeah, image performance is a big can of worms, but, so, responsive image tags, for example. We're all building responsive websites, and most of us are serving the same markup to every device regardless of what it is. So if you're not careful, you could be sending your huge image to every single device. Responsive image tags that use image styles in Drupal, if you're working with Drupal, mean the mobile users get the mobile version. That's huge; that's your biggest one. But then server-side compression: in Drupal, again, I don't know if you're using Drupal or not, you can tweak the quality and say, I want every image compressed to 80%. I think I had a cheat sheet for this... oh, I had to go much farther back than I expected, sorry. And then formats are kinda tricky. I mentioned WebP, but it's pretty tough to switch those formats out based on the browser when you don't have a powerful CDN to do it for you. But then lazy loading, also a huge one, I don't know if you're familiar: you can implement some JavaScript that's gonna only load images as you need to see them. So it's kinda running through all that kinda stuff.
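The JavaScript lazy-loading idea mentioned here can be sketched with IntersectionObserver. This is a browser-only illustration, not any particular module's implementation; it assumes markup uses `<img data-src="…">` placeholders instead of `src`, and the function name is made up.

```javascript
// Browser-only sketch: swap in the real image once it scrolls near the
// viewport, so below-the-fold images don't compete with initial page load.
function lazyLoadImages(root = document) {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target;
      img.src = img.dataset.src;      // kick off the real download
      img.removeAttribute('data-src');
      obs.unobserve(img);             // each image only needs this once
    }
  }, { rootMargin: '200px' });        // start fetching a bit before it's visible

  root.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```

Browsers have since also grown native `loading="lazy"` on images, which covers the simple cases without any script at all.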
You know, the compression is gonna help you on those first-time visits with those images, and then the lazy loading is just gonna help the overall page.

Is there a compression percentage that you recommend?

It's kinda tough; you have to test. I feel like 70 to 80% is where a lot of people land, to have significantly compressed images that don't start to look terrible. If you start going to, like, 60 or lower, then it gets pretty messy.

I was gonna help you here. There's a module called ImageOptim, which will give you an image style that does lossless compression, so there's basically zero quality loss but it'll still shrink the image size down. And then also, if you don't have a CDN, HTTP/2 really helps with that.

Yeah, absolutely huge, if you can flip that switch. And that's not limited to images: if you can get HTTP/2 on, it's huge. If you can't do anything else but you can flip that, that's the way to go. You have to have HTTPS, but soon enough Google wants everyone to have HTTPS, so I would hop on that train. Cool. Thanks again. Appreciate it.