Hi there, guys. Looks like we just went live. Just confirming that here on the screen. Yes, perfect. We're live right now, with a five-minute delay. And now we are on air for you guys, and I hope you can hear me well, to speak about the illusion of speed. I got the mic here. So: the illusion of speed. Recently we've been talking a lot about in-depth optimization techniques, and today we want to add the other side of the equation: the user. What studies and research are out there on user perception of speed, and what repercussions does that have for the optimization techniques we can apply to make sites appear to load faster to the user? Because metrics are not the one and only ingredient in the soup of optimization; it's also about how we actually deliver information to the user. Ultimately it's people using our sites, right? So we don't want to get stuck on the key optimization metrics alone. Great, so that's why today we want to talk about the illusion of speed. As always, with me here is Dom from our German mobile site transformation team. Dom, maybe you want to greet our dear friends on the other side of the hackathon on air. Hey guys, good to meet you again. As you probably know by now, we run this once a month, and we're quite excited to talk about the psychology behind speed today. Usually we go much deeper into the technical side of how to optimize your site, but now we want to speak a little bit more about why we actually do all of this. So yeah, I hope you enjoy it. And as always, if you have any questions, just pop them into the live chat and we'll try to answer them as well as possible. We also have a colleague sitting with us here who's going to be in the chat throughout the session. So any question, just ask, and we're happy to address it. So yeah, let's get started. Perfect. Great.
So as Dominic just said, today we look a bit more at the psychology side of things. But maybe a small intro, Dom: why are we looking at this topic at all? Basically, when we're talking about UX on mobile sites, speed is sort of the bare bones of all of it. You can have the prettiest site, the most beautiful UI, but if it takes really long to load, you get a huge drop-off and the vast majority of users won't even see it. In this graph, we show how mobile site speed correlates with bounce rate and conversion rate. You can see that at about 2.4 seconds we hit the sweet spot, where the bounce rate is the lowest and the conversion rate is the highest. And if you look at the two yellow bubbles, the median speed index and the average speed index, we're looking at how long it takes a mobile site, on a global scale across all verticals, to load the critical content. The second yellow bubble is at about 10 seconds. Compare that to the sweet spot of 2.4 seconds and you see there's a huge discrepancy. We just want to ensure that you have the highest chance to convert your users, which is important if you rank organically, but even more important if you actually pay for a click. Exactly. Great. You've got some examples prepared there, right? Yeah, I mean, it's always easy to talk about it and say, yeah, definitely, if you speed up your site you'll see a massive improvement. But we think it's always important to back it up, so we have a couple of external case studies. A few guys from our teams worked, for example, with Adyamore, and they achieved that 100% of their pages load in under five seconds on a 3G network.
Probably at that point it's important to note that whenever we talk about mobile speed, we're looking at speed under 3G network conditions, basically, because we think that if you optimize for the worst possible case, you've got all of the users coming to your site covered. And they achieved that the page load is under five seconds. What you see is that the number of mobile sessions increased by 7%, and more importantly, mobile revenue during the peak season, which is the holiday season, increased by 38%. I think it's really important at this point to also mention that it's not just mobile site speed; it's site speed plus other UX improvements. But as I said at the beginning, mobile site speed is sort of the base of everything. So you can see that it has a really, really positive impact on the return on investment. And just to throw in another case study: PMX Agency basically worked with people at Google on delivering workshops for their own clients. What they achieved is that 46% of their clients' AMP sites were able to improve their site speed, which resulted in 8% more revenue and a 12% decrease in cost per order. I think this is really important, because ideally you want the lowest possible cost per order, and 12% is a significant drop, which is really, really positive. So those are just two case studies showing that speed matters. But I'm sure if you Google a little bit and read about the topic, you'll see there are so many examples on the web showing that an improvement in site speed results in much more revenue collected through mobile users. Great. So that seems to be some pretty solid backup for this whole idea of having a second look at mobile speed. And today we want to take a little bit of a different approach.
As I said before, we've done in-depth sessions where we spoke about the carousel and the checkout funnel. We spoke about the whole pre-family in the last hackathon on air, and about generic optimization techniques before that. So today we want to take a little bit of a different angle and also capture the side that is on the receiving end of all the content we're sending out. There is actually quite some nice content out there on this, and we're going to link to some articles afterwards in the description, but we want to summarize it for you and give you a quick overview. There's a nice quote from the cute little cat in Alice's Adventures in Wonderland, which says that imagination is the only real weapon in the war against reality. Given that we already do a lot of work on the hard-metric side of things, the actual performance of the site, the objective time it takes to load, we can start playing a little bit with the psychological time it takes to load the site in the eyes of the user. There are small things we can take into consideration, because there will be a difference between the objective time and the psychological time. We want to go through the different ideas and quickly touch on them so that you have them at least in the back of your mind when you go through the next round of optimization. Yeah, as I said before, we have covered a lot of different techniques to get content out to the user quicker, from very easy steps like code minification, the JS/CSS minification, image optimization, a lot of compression techniques, the way you load images with lazy loading, up to the idea of critical CSS extraction: various techniques that we've gone through in depth and that help you decrease your speed index, just to reiterate on that.
Speed index is basically the metric we want to minimize, because it gives us the time it takes to paint the full above-the-fold content, so that we can give the user meaning and a call to action on the site. After having gone through all of this, there's obviously the other side to take into consideration: the users. There's quite some research out there, and as I said before, we'll link to the studies and their names in the description after the hackathon. There are studies that really suggest that for the user who is actually receiving the content, the perception can be that a site loads up to 15% slower than it actually does. So we don't only have the barrier of the bandwidth, we don't only have the barrier of sending all our images out there to get our carousel to the user; on top of that, we also have the challenge that users might actually perceive that content to be loading slower. What we want to do is go through that in more detail to see where there are steps we can take to circumvent that effect, so that users don't end up perceiving content to load even slower than it actually does. With that, we basically want to compare the objective time, which many of you have already optimized quite a bit with big successes, against the psychological time it takes in the user's eyes until content is ready. Because if you're designing for humans, it is helpful to keep in mind how their minds actually work and how long their attention span is.
It's actually quite interesting: there's this idea that with the internet and its constant information stream, people would get more impatient, but the actual patience that users have for waiting has stayed relatively stable over many years. If we look at this particular graph, we can see that a reaction to an action that takes about 0.7 seconds or less creates the illusion of an instant response. It instantly feels like you got it. Everything that takes up to a second lets the person perfectly maintain the flow of their thoughts. And this is actually what we want, because the user came to complete a task, and we do not want to disrupt that flow towards task success that we set the user on. So obviously, if we load within that time, there's no interruption in the flow whatsoever. Different studies suggest slightly different thresholds there; for example, we've also seen research that goes up to five seconds. That's why we usually take the stand to at least load your above-the-fold content in under five seconds on 3G. And if it takes longer than 10 seconds to get users to the content, it's really, really hard for them to keep attention, to stay focused on that one particular task. That doesn't mean the user will necessarily leave your site, and obviously there are a lot of metrics around this, but the user is definitely not 100% focused on the task anymore; they have to regain focus, re-decide to actually do that particular action, and that's something we want to avoid, right? We definitely want to make sure we give the user the right content to keep them in their flow, to finish the task, and in the end optimize all the metrics you have with regard to conversions.
Because the last thing we want is to sabotage our own success by not delivering content. And there's a lot of research that underlines the idea that mobile users react really badly when this flow of thought, this task flow, is broken. We can see that, for example, 39% of users are affected; this particular figure is from an Akamai study, but there is a lot of research around this. A large percentage of users are unhappy just generally using mobile, right? They mention having had performance issues; they might not return to a site because it was loading slowly. This situation takes a lot of different forms in the users' answers, but in the end the underlying idea is: if we break the flow, if we don't support task success by giving them a fast experience, we're going to get one form or another of negative reaction from the user, which might even result in them not returning to our site. So that's how the user deals with the objective time, right: the time it actually takes to load our site and how the user reacts to those time periods. If we now go one step further: okay, I started optimizing my site, I took those optimization techniques from the mobile transformation team or any other blog post (there are a lot of blog posts out there with great ideas that also feed the information we take on in the team here), but nothing changed, right? The metrics didn't change. One thing that is important to take into consideration here is: how much of a change does it actually take for the user to really notice a difference, an improvement in site speed?
Behind that, there's a very granular body of research that goes back many, many years, and it's not only about web performance but in general about how big a change in a stimulus has to be before we notice it: the just-noticeable difference is proportional to the magnitude of the stimulus. The key finding, or let's say the rule of thumb, is: if you're not able to change your site speed by more than 20%, it's very unlikely that users will even notice. To be on the safe side, aim for an improvement of at least 30%, somewhere around that area, before anybody will even notice, because if users don't notice any difference on your site, how would they ever perceive that something changed? Yeah, so the Weber–Fechner law is basically from the guys who looked into this topic a long time ago; as I said, not necessarily web-related, but ultimately it's about how quickly we notice any differences. And with that, I want to quickly jump into: what actually makes up waiting? What different phases are there in the waiting time a user goes through? It's very simple: a wait basically has a start marker and an end marker, right? And what happens between those markers is what we want to optimize for. If the user has a task they can do, if the user is in a sense distracted, if there's no rumination in their mind about the wait time, we call it active wait time: the user is distracted, basically not focused on waiting, and in many cases does not even perceive the time as a wait. Versus the passive wait time, where the user does not have any particular task, does not see any content, does not see any progress; that is what we consider passive wait time.
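The Weber–Fechner relationship behind that rule of thumb can be sketched like this (applying it to load time with a constant of roughly 0.2 is our reading of the 20% figure above, not a number from the original studies):

```latex
% Weber's law: the smallest change \Delta I we can perceive is a
% constant fraction k of the current stimulus intensity I.
\frac{\Delta I}{I} = k

% Applied to page load time with k \approx 0.2: a 10-second load
% has to drop below roughly 8 seconds before users notice at all.
\Delta t_{\text{noticeable}} \approx 0.2 \cdot t_{\text{current}}
```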
Ultimately, this passive wait time is really hurtful for our sites, because if we give the user no content for quite a while and that turns into passive wait time, it gets even worse than it already is: as the example here shows, humans usually overestimate passive wait time by 36%. So if, as we see on a daily basis, there's no rendering in the first four to five seconds, and that is perceived as passive wait time, the user can't do anything, or they will switch tasks, right? Break out of their flow, do something very different, maybe never return to that particular task. And if they actually stay on the site and look at a white screen for quite a while, in many cases that will be perceived as even longer than it actually is. So not only do we have the problem of a white screen for a long time; the user actually perceives it as worse than the metrics suggest. That puts even more emphasis on the idea that we definitely have to avoid any passive wait time the user can encounter, in this case a white screen where the user can't do anything, can't take any action. With that, the whole idea we've discussed before, the critical rendering path, becomes even more important: we're not only working against the barrier of the actual time, but also against the barrier in users' minds. So how can you think about the optimization there? Two very different, very simple starting points: one being the preemptive start, the other being the early completion. If we are able to give the user an active wait time from the very beginning, a preemptive start, we push them out of the passive wait time and basically give them content straight away.
On the other hand, with early completion, if we are able to pull the end marker of the wait towards the start point, basically giving a perceived shortening of the time, that's also something we can do. And that's where we get into actionable things to think about for mobile performance optimization. Before we go through a couple of techniques that tie into this idea, a very quick example to make it actionable and real. The preemptive start: we move the start marker by opening our event with the active phase rather than the passive phase, right? We give the user something they can interact with straight away, and we hold that active stage ideally as long as possible. The whole last hackathon, basically the whole idea of DNS prefetch, preconnect, prefetch, and prerender, is all about that. It's all about moving that start marker to an earlier point in time so that content is already ready by the time the user actually needs it. So, is there anything you want to mention on DNS prefetch, a small recap? Is there something that comes to your mind, or is it better to check out the hackathon? Yeah, I think it's definitely worth checking the last hackathon session, because we really talked about that in depth. As you already said correctly, the whole pre-family is an attempt to resolve certain connections as early as possible so that we don't further delay the rendering later on. DNS prefetch, preconnect: I think what's important to mention is that you can actually use both. For example, if you want to preconnect to the gstatic.com domain when you're loading a web font, you can try to preconnect and use DNS prefetch as a fallback.
So if preconnect doesn't work or isn't supported by the browser, it just falls back to the DNS prefetch, which doesn't establish the whole connection but at least translates the domain name into an IP. I think the most important thing is that DNS prefetch and preconnect are optimization hints. That means the browser doesn't necessarily need to act on them. The browser does a lot of calculations and tries to be really smart about what you need next and which resource might be critical. So if you implement preconnect or prefetch and you run a couple of tests and you see, oh, it doesn't actually preconnect: don't waste time debugging. Again, it's an optimization hint, and the browser might just not act on it this time but may do so at a later time. I guess that's just one important thing to mention. In the last session we really covered this topic in depth, so I highly recommend watching it, and if there are any questions, post some comments and we'll try to answer them as quickly and as well as possible. Perfect. So the first idea to think about is moving the start marker, getting things done earlier. The second is moving the end marker of the timeframe closer to the beginning, closer to the start. One very obvious example of this: let's not wait until we have all the content ready to make a big reveal and spit out the content all at once, as we see here in the second example. Instead, really think about progressive rendering, which we've touched on a couple of times, in order to push the end marker towards the beginning. A similar example, not necessarily actionable on the mobile site side, is what happens on YouTube, right?
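As a quick sketch of that fallback pattern (the font host here is just the gstatic.com example from the talk; treat the exact URLs as illustrative):

```html
<!-- Resource hints in the <head>: browsers that support preconnect
     open the full connection (DNS + TCP + TLS) to the font host early;
     others fall back to dns-prefetch, which only resolves the name
     to an IP. Both are hints - the browser is free to ignore them. -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.gstatic.com">
```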
You start watching a video, and instead of waiting for the whole video to be ready, we push the end marker closer to the start and the user can start watching chunks of the video. That's a similar idea to what we want to achieve here. Moving those markers and optimizing how content is delivered to the user is what we want to go into a little more now, going through a couple of techniques for how you can manipulate that without necessarily changing the overall load time. Sometimes that will also happen as a good side effect of some of these things, but really, think about maximizing active wait time and minimizing passive wait time by moving those markers. Cool, yeah, thanks for that. I mean, that was super interesting. I think it's really important to understand the psychology behind wait time to make more sense of it and realize why it's so critical to optimize your website for mobile speed. As we said at the beginning, we don't want to go too much into detail this time, so we're covering certain aspects at a higher level, but if you have any questions: we ran a few sessions in the previous months where we covered specific optimization techniques, so definitely feel free to check them out. One important thing to start with: what actually is the critical rendering path in terms of timeline? Is it one second? Is it three? Is it seven? It's obviously always relative. We believe it's three seconds, based on aggregated and anonymized Google Analytics data: what we see on average is that 53% of mobile site visits are abandoned if a page takes longer than three seconds to load. Again, this is measured on a global scale across all verticals, so it might be completely different in your vertical or on your specific website.
However, even if it's not a 53% drop but 25% or 20%, you really don't want to lose every fourth or fifth visitor, especially if you invest a lot of money in your website. That's why we say the first three seconds are crucial, and it's even more crucial that within the first three seconds you start rendering. If you show something, you get the speed perception up and turn the passive wait into an active wait, as Lucas explained earlier. So let's get into analyzing the critical rendering path and understand it a little more. These are really, really simple examples, and they're by no means representative of a full website or an e-commerce shop, but they give you an idea of how it actually works, how rendering gets blocked, and how we can speed it up. So think about requesting an HTML document, and that document contains a photo. Your rendering path would look sort of like this: you request the HTML, you wait for the server to respond (that's the time to first byte), you get the response, you build the document object model, and you start rendering the page. In this case, as soon as you get the response, you can immediately build the DOM, because you have no render-blocking resources. If you look at what I circled in red, you see that the DOMContentLoaded time is 216 milliseconds and the load time is 335 milliseconds, which basically means that after roughly 0.2 seconds we start rendering and after 0.3 seconds we're done. The discrepancy is purely because we still need some time to load the photo, but it's not render-blocking, so we can start rendering and then just wait for the picture. And there are some techniques to improve the speed perception there as well.
We're talking about progressive JPEGs versus baseline JPEGs, which give the user a better experience because you see the whole image slowly sharpening until it's crisp and clear, rather than watching it build up line by line. All right, so as I said, we start rendering after 0.2 seconds and the load time is at 0.3. Now, most websites don't just have one image, obviously. You probably need to load some CSS to style your site, and you probably need some JavaScript for dynamic features. What happens now is that you request the HTML, you get the response, you start building the document object model, but now the browser hits the CSS and the JavaScript file. The rendering is blocked, and the parser is blocked, until we've fully loaded these two files. In this moment we don't do anything; the browser doesn't render anything, and the result is a white screen, a white screen where nothing is happening. Once we have those resources, we build the CSS object model, we run the JavaScript, we finish building the document object model, and then we render the page. So as you can see, just by loading CSS and JavaScript we heavily delay the process. To show you the effect: before, we just loaded the HTML and the photo; now we load the HTML, a CSS file, a JavaScript file, and a JPEG. And you can see that the DOMContentLoaded time is suddenly almost equal to the load time: DOMContentLoaded is at 0.3 seconds and the load time is at 0.34 seconds. You might argue: well, what's the big deal? I loaded CSS and JavaScript and the load time is still the same, so I'm not too worried. True, but the thing to focus on here is that you start rendering much, much later, which means the white screen stays on the user's device for much longer, and now we're back at the waiting time.
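A minimal page matching that second example might look like this (file names are illustrative, not from the slides):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Both of these block the first render: the browser must fetch
       the stylesheet before painting, and a classic script tag also
       halts HTML parsing until the file is fetched and executed. -->
  <link rel="stylesheet" href="style.css">
  <script src="app.js"></script>
</head>
<body>
  <!-- The image is NOT render-blocking: the page can paint while
       photo.jpg is still downloading. -->
  <img src="photo.jpg" alt="Product photo">
</body>
</html>
```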
A white screen doesn't tell the user anything. The user cannot be active, cannot watch the content slowly come together; it's just a white screen, there's no message, the user doesn't know what's going on. Now, if we're talking about 0.2 versus 0.3 seconds, that's not an issue, but if it's the difference between two seconds and five seconds, it actually becomes one. That's why it's really, really important to focus on what resources you load, when you load them, and how they actually affect the rendering of your site. So there are multiple options for improving this, because we totally understand you will always need some CSS, and often you need some JavaScript too. It's probably unrealistic to say just get rid of all of it, but there are some very cool ways to speed it up. Look at the asynchronous loading of resources: again we request the HTML, we get the response, we build the DOM, and now, instead of loading and parsing the CSS and the JavaScript immediately and blocking the rendering of the page, we do that while we render the page. You can see in this example that the CSS is non-render-blocking and the JavaScript is async. So we build the DOM, we start to render the page, and once we have the CSS and JavaScript resources, we build the CSS object model and run the JavaScript. So let's go back to the previous slide.
You can see how building the DOM is blocked: we build the CSS object model, we run the JS, we finish building the DOM, and only then do we render the page. Whereas when we load resources asynchronously, we start rendering much, much quicker, and the big benefit is that we keep the user occupied while we load resources. This is really what we mean by optimizing the critical rendering path. It's not so much about loading everything as quickly as possible and finishing the site in the shortest possible time; it's more about getting the speed perception going, getting the rendering going, so that the user is occupied and doesn't bounce as quickly. There are two ways of loading JavaScript asynchronously: async or defer. In both cases, the script download happens while we parse the HTML. The difference is that with async we briefly pause the parsing to execute the script as soon as it's downloaded, while with defer we execute the script at the very end, after parsing. Which one to use depends: the common thing is to load it async, because if you load JavaScript it's probably important to your site, so you do want to execute it as soon as it's loaded. However, there are situations where you have JavaScript dependencies. Say you have a jQuery plugin, which obviously requires jQuery to function. In this case you don't really want to use async, because async does not necessarily preserve the order of your files: if you load both files asynchronously, the browser might execute them in the wrong order, and that breaks your plugin. Here you really want to use defer, because defer makes sure the order is correct and each script is executed in the order you put it into your document. So again, this is the best practice for JavaScript.
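Sketched in markup, with illustrative file names:

```html
<!-- async: download in parallel with HTML parsing, execute whenever
     the file arrives - execution order is NOT guaranteed. Good for
     independent scripts such as analytics. -->
<script async src="analytics.js"></script>

<!-- defer: download in parallel, execute after parsing finishes, in
     document order - safe for dependency chains like jQuery plugins. -->
<script defer src="jquery.js"></script>
<script defer src="jquery.carousel-plugin.js"></script>
```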
It's really, really important to have no JavaScript dependencies in the critical rendering path. Think about slideshows that require jQuery plus another jQuery plugin: that delays the time until the user sees the critical content by quite a bit. And the great thing is, what you can do with JavaScript you can also do with CSS. I think all of you would agree that when you load CSS, especially a global CSS file, there's a lot of CSS that's non-critical. You can use Chrome DevTools to see which parts of the CSS you actually don't use, and you'll be surprised how much it is on each page you load it on. Now, it makes sense to load that global CSS at some point, because you don't want to delay the rendering on any subsequent page in the funnel. But it doesn't really make sense to load all the CSS you need somewhere across your site on the entry page, let it be the home page or a product page, and delay the rendering for basically nothing. The great thing is that there are ways to load non-critical CSS asynchronously. There's preload, which you can use in the rel attribute: it fetches the CSS, but without blocking the rendering, so the browser doesn't wait for the CSS any longer and builds the render tree right away. Now, the problem with preload is that cross-browser compatibility is not great: a lot of browsers don't support it, so you need a fallback, a polyfill, for those situations. And that's where loadCSS comes into play. I put the link to the GitHub page there; it's a really cool JavaScript function because, at first, it does nothing. You implement it the way we show there, and it doesn't do anything.
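The preload pattern itself looks roughly like this (file name illustrative; the onload swap is the standard trick documented with loadCSS):

```html
<!-- Fetch the non-critical stylesheet without blocking rendering.
     as="style" downloads it with stylesheet priority; once loaded,
     onload flips rel so the CSS is actually applied. -->
<link rel="preload" href="non-critical.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">

<!-- Fallback when JavaScript is disabled. -->
<noscript><link rel="stylesheet" href="non-critical.css"></noscript>
```

In browsers without preload support, the loadCSS polyfill from the linked GitHub repo detects that and performs the same asynchronous load itself.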
It basically just checks whether the browser supports preload. If it does, loadCSS stays out of the way and you're done. If the browser does not support it, the loadCSS function jumps in and does the same job. So what you can do is extract the critical CSS from your global CSS file and load it just as you normally would, ideally even inline if possible, because that saves one extra round trip to the server. All the non-critical CSS you put into a separate file, load it asynchronously and let it go into the cache, so that further down the line, further down the page or on subsequent pages, all the CSS is cached and ready to be applied. That way you speed up not only the entry page but all subsequent pages as well. The third critical resource to optimize is web fonts. As I said, JavaScript and CSS are render blocking, and web fonts are as well, because a web font is based on two separate resources: first there's the CSS resource, and then there's the actual font file, and those two always come together when you use a web font on your page. What's even worse: if you load multiple variations of a font, say the bold variation for your headlines, the normal variation for your body text and the italic variation for block quotes, you need to load three extra font files. With each variation and each new web font, you add to the number of resources you need to load, and ultimately to the load on your site. That covers the font files, but as I said, a web font also comes with a CSS resource, and as we learned previously, CSS is render blocking. So when you use a web font, you delay the rendering of your website even further. And if we look at it here: we get the HTML, and then we are render blocked because of the CSS.
Now we build the CSS object model, and what's important to note here is that the render-blocking CSS includes the CSS for the web font. So again, it takes a little longer, and if you load the web font from a third party it increases even further, because you need another DNS resolution, another TCP connection, another TLS negotiation. Now let's suppose we got all the CSS and actually built the CSS object model. We still need the font file. So we start with the first paint, but we're not done with the font file yet, and we get the flash of invisible text: the content is there at first paint, but because the font file is still missing, the user doesn't see it. The Chrome browser, for example, waits three seconds for a font file to load; if it doesn't arrive within those three seconds, it automatically falls back to a fallback font. But even if you do load the font file within three seconds, it still means that for up to three seconds the user doesn't see any content. And we learned about the short attention span of users and how important it is to display information as quickly as possible. So I think you get the point: it's really not ideal to use a web font to display critical information and further block the critical rendering path. That said, web fonts are nice, are pretty, are important, so we don't want to say don't use web fonts; that's not at all what we want to say. We just want to emphasize using them smartly, and there are a couple of ways to do that, a few of which we outline here. First: load web fonts asynchronously. There are multiple ways of doing that, and a lot of JavaScript libraries that can do it; I'll just use Web Font Loader as the example here. Another one is Font Face Observer. Basically, with a bit of JavaScript, not too much, you load the font asynchronously and work with different CSS classes.
So in this example the classes are wf-loading, wf-active and wf-inactive. The wf-loading class is applied to the html element, and under it you define the styles for a fallback font. So while the font has not loaded yet, the content is displayed in the fallback font, be it Arial, Georgia, whatever it might be. Then Web Font Loader loads the font and signals: okay, I got the font, it's there, it's done. At that moment the wf-loading class is swapped out for the wf-active class, and within the active class you define the styles for the custom font. So when the classes swap, the font obviously swaps too, and you actually start presenting your content in the font you want to present it in. That is a really, really good way of doing it, because you kill two birds with one stone: on the one hand you do present your content in the font that you want, and on the other hand you don't further delay the rendering of your site. Now, a lot of people will say: well, but if I do that, it generates this flash of unstyled text, meaning that when you swap the fallback font for the custom font, you get this flicker. And that's a fair point; it doesn't look too pretty. I personally think it's not too bad. Users are used to pages rendering and content jumping around a little bit, so I don't really see it as a big issue. However, I do understand that some people really don't like it. Most likely it would only be the first visit in most cases. Exactly, it's just the first visit, because once the font is cached, it's cached, so if you have a lot of returning visitors, they don't even have the problem of flickering content. But let's focus on the first-time visit, and say you really don't want this flicker.
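A minimal sketch of that class-swap approach, assuming Web Font Loader and using Merriweather and Georgia purely as example fonts:

```html
<script src="https://ajax.googleapis.com/ajax/libs/webfont/1.6.26/webfont.js"></script>
<script>
  // Loads the font asynchronously; Web Font Loader toggles the
  // wf-loading / wf-active / wf-inactive classes on the <html> element.
  WebFont.load({
    google: { families: ['Merriweather'] }
  });
</script>
<style>
  /* While loading, and if loading fails: render in the fallback font. */
  .wf-loading body,
  .wf-inactive body { font-family: Georgia, serif; }

  /* Once the font has arrived: swap to the custom font. */
  .wf-active body { font-family: 'Merriweather', Georgia, serif; }
</style>
```

The content is readable from the first paint in the fallback font, and the swap happens whenever the web font is actually ready.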
That's understandable, and there are solutions for the flicker too. One of them is Font Style Matcher; I put the link there as well. It's a really great tool: it lets you select a fallback font, in this example Georgia, and a web font; it has a huge selection of web fonts, so hopefully yours is part of it, and in this case it's Merriweather. Then it lets you play around a little with font weight, letter spacing, word spacing, font size and line height to match the fallback font to the web font as closely as possible. With certain fonts you can match fallback and web font so closely that the flicker is almost not noticeable, though if you end up in that situation, you should ask yourself whether you actually need the web font at all. Either way, you can make the two so similar that the flicker becomes very, very small and users aren't really bothered anymore. So again, I highly recommend using this technique to optimize the critical rendering path, and Font Style Matcher to reduce the flash of unstyled text. And last but not least, there are a couple of other things that are always important to mention. One of them is base64 encoding of icons. What we see a lot is mobile sites that look something like what's on the screen right now: you've got the hamburger icon for the navigation, the magnifying glass for the search, the basket for the shopping cart. And what sometimes happens is that those icons are loaded as part of an icon font. Icon fonts are great because they include a lot of icons that you can probably use somewhere on your page.
However, the problem is that in many cases the icon fonts are loaded synchronously, or as part of a bigger icon pack, which means you're not just loading the three icons you need, you're loading 30 or 50 icons, and they're often hosted on a third-party server. So you include this icon font very early in your document, because it makes sense: you think, well, I need the icons. Now the browser parses through the document, hits this resource and says, oh, hold on a second, I actually need that, because, as we already talked about, CSS, JavaScript and so on are render blocking. So suddenly you're connecting to a third-party server, you need another TLS negotiation, you need to download the font file, all just to display those three icons. You're heavily delaying the critical rendering path just to show three icons, and that really doesn't make any sense. If you're in a situation like this, it makes much more sense to extract the icons that are really critical to your content and base64 encode them. The benefit is that by base64 encoding them, you can embed them directly into the HTML document; you don't need an extra server round trip, and the browser can just parse through and display them right away. It's a really cool technique to speed up your site. One thing to be aware of is that base64 encoding usually enlarges the resource by about 33%. When you're looking at icons that are maybe one kilobyte big, that's not really a biggie. However, if you have a larger image, don't base64 encode it, because you might not actually gain the performance benefit you envisioned.
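As a sketch, an inlined icon might look like this (the truncated base64 string stands in for your real icon data):

```html
<!-- The icon travels inside the HTML document itself - no DNS lookup,
     no TLS negotiation, no extra request. Note the ~33% size overhead
     of base64, so keep inlined images small. -->
<button aria-label="Search">
  <img alt="" width="24" height="24"
       src="data:image/svg+xml;base64,PHN2ZyB4bWxucz0i...">
</button>
```

For SVG icons specifically, you can often skip base64 entirely and inline the SVG markup directly, which avoids the encoding overhead altogether.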
However, if you say, I have a hero image that I really want to render as quickly as possible, what you could do is reduce the quality of that hero image, say down to six or eight kilobytes, base64 encode it for an even quicker speed perception, and then, for example, swap it out with JavaScript for a high-resolution image on the onload event. Those are techniques you could use, and I highly recommend playing around with them. But again, for a few icons in the header, base64 encoding is a really cool way of optimizing. Yeah, so those were the most important techniques, at a very high level, that you can work with. If you want to learn more about any of them, just go ahead and browse, there's so much information out there, check out our previous sessions, or comment and we'll be happy to provide more links or more information. And from here, let's go into the situations where the user needs to do certain things, it does take time, and how you actually manage that wait. Yeah, before we jump in there, just one small thing: there is an individual Hackathon On Air session already out there on the critical rendering path in particular, so you can check that out. There's one in particular on carousels, so that we make sure we deliver hero image content quickly to the user, very much in line with what Dom has been discussing. So make sure to use those sessions too: when you find some of the topics only touched on here in this particular hackathon, jump into those in-depth sessions, where you'll find links to the particular GitHub repos that can be helpful, or just an in-depth run-through of these topics. Okay, so, yeah, sometimes we cannot really alter the time, right?
Sometimes we cannot push content around to make it appear faster, to make it come up progressively on the screen. And then we have to do something else, right? Very similar to this idea of keeping people occupied, we find mirrors in every elevator today, on all sides or at least one of them. Why? So that people stay busy, either looking at themselves or secretly at other people in the elevator. It keeps people busy with a task, and by that it turns the passive wait time into active wait time, where the perception of the wait is much smaller than the actual time it takes. There are more and more examples out there, and Dominik has listed a couple for you guys here. Yeah, I mean, as Lukas mentioned already: busy people are less impatient. Keep people occupied and they won't notice the wait. These are just three examples taken from real life to show how it actually works. Lukas already talked about the elevator, which I think is a really great example. Others include, on the left-hand side, the traffic lights in New York. There are about six and a half thousand with those buttons you can press to request a green light. Four and a half thousand of the six and a half thousand don't work. What does New York City do? They keep them anyway, because pedestrians believe they actually make the traffic light turn green quicker. The middle picture shows the tube in London. You've got buttons to open and close the doors. They don't really do anything, because the driver does that anyway.
But the London Underground keeps them, because passengers on the tube believe: okay, I can push the button and actually have an impact on how quickly the door opens. And last but not least, you see people at the airport waiting for their luggage. That's actually Houston Airport. The feedback Houston Airport received was that passengers said it takes too long until the luggage arrives, and they were really annoyed about that. Logistically, Houston Airport was not able to improve that time, because obviously there are a lot of processes that need to be followed. So they thought: okay, we can't actually improve the actual time, so what do we do? They made the walk from the gate to the baggage claim about six times longer, so that people were occupied walking. And as a result, the passengers' feedback was: this airport is great, as soon as I want to pick up my luggage, it's already there. So I think those are really good examples of how you can optimize wait time without really optimizing it. In the last case they even increased the actual time it takes, but they replaced a passive wait, standing in front of the carousel waiting for your luggage, with a walk, and by that created a much different perception of time, and by that, much better metrics. Okay, but what does this mean for the web? Obviously many people are aware of the use of animations, and we don't necessarily tell you something new by mentioning them. But what we wanted to do is place animations into the spectrum of the perception users have and their willingness to wait, right?
So at the very beginning, when we're at about a one-second load, maybe slightly above that, one or two seconds, what makes a lot of sense, if you want to indicate that something is happening, is an animation that carries no text and no progress bar, because we don't actually want to disrupt the user's flow of thought and flow of experience. At this point in time, spinners are a great thing, or spinners with your logo. I actually think spinners with a logo are a very interesting idea, because what we very, very often see is that, due to delayed rendering, even logos on mobile sometimes appear after eight, nine seconds, sometimes even ten seconds of load time I've seen, and that's how long it then takes for users to know which brand they landed on. So if you're playing with animations in this timeframe, it makes sense to have very light animations that don't add any cognitive load to the user but simply confirm that they're in the right place and that something is happening. As we move along the time axis, that obviously changes, and we have to cater to different user expectations. If we move into the phase where users barely keep their attention, somewhere between five and ten seconds, it definitely makes sense to give the user a bit more information about what is happening. Progress bars are a great example at this point in time. And if you Google a little bit, there's a lot of research on which types of progress bars work best, for example backwards-moving stripes within the loading bar; I think Gmail is actually using that as well.
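One hedged way to get that moving-stripes effect with plain CSS (the colors and sizes here are arbitrary):

```html
<div class="progress"></div>
<style>
  .progress {
    height: 12px;
    background-color: #4285f4;
    /* Classic diagonal stripes built from a repeating gradient. */
    background-image: linear-gradient(45deg,
      rgba(255, 255, 255, .3) 25%, transparent 25%, transparent 50%,
      rgba(255, 255, 255, .3) 50%, rgba(255, 255, 255, .3) 75%,
      transparent 75%);
    background-size: 40px 40px;
    animation: stripes 1s linear infinite;
  }
  /* Shifting the background by one full tile per cycle makes the
     stripes appear to march along the bar seamlessly. */
  @keyframes stripes {
    to { background-position: 40px 0; }
  }
</style>
```

Because it's a pure CSS animation, it keeps moving smoothly even while the main thread is busy fetching and parsing, which is exactly when you need it.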
So this is a good place to use that type of animation to give the user a real understanding of what's happening, given that we're now waiting a little longer. You can sometimes underline it with text about what might be happening, which information is being retrieved, giving the user some more information so that we keep their attention, given that we're delivering them a bit more in return. If things start taking even longer, it obviously makes sense to get more and more creative with the animations to keep the user occupied in a certain way. Sometimes it works, sometimes it doesn't, but it's definitely worth a try. So if you're aware that your content takes much longer to load, get creative, right? There are tons of nice examples out there; context-relevant animations that make the user smile are great in this case, very much because we want to prolong the period in which the user stays in their flow of thought, stays on the topic, and doesn't switch away and change context. And I think Dominik has a couple of nice examples he wanted to mention in this regard. Yeah, I think what's important to mention is that it's not always bad to make users wait. There are many, many examples where you do hard work, a tremendous job, and I think users understand that it takes time. The problem is always when you don't communicate that to the user. So again, because it's always easier to talk about real-life examples: let's think about the travel industry, and think about walking into a travel agency, a brick-and-mortar shop, wanting to book a holiday to Spain. So now you sit down at the desk and say to the person in front of you: hey, I want to book a trip to Spain, I need a flight, I need a hotel. And they say: okay, cool.
And then they don't say anything anymore, just sit there, look at their computer, type away, no feedback, no nothing. You're probably sitting there and at some point wondering: what's actually happening here? It's getting awkward, right? Now, it takes a bit longer for you to leave, just because you already made the effort to walk into that store, so you don't leave straight away, but I think we would all agree that we would feel awkward. However, if the person said: hey, hold on a second, I'm just checking if there's a better hotel right now, or whether there's a cheaper flight, or a flight at a better time, then you'd say: okay, yeah, that's cool, they actually care about me, they really care about getting the perfect offer for me. So you're not sitting there frustrated or annoyed; you're sitting there with a good feeling, thinking: in the end I'll get exactly what I want. And I think the user that visits your site is not much different. So, two examples, shuttle supermarket and Booking: you see they do a lot of stuff when you request something. They compare prices of hotels and flights, they might need to tap into different APIs, and that's all fine. Now imagine they said nothing and instead there was just a spinner for two, three, four, five seconds; at some point you'd probably think: what the hell is happening here? But obviously they realize they need to communicate to the user what's going on, so they say things like, hey, looking for the best deals, or, don't worry, searching all the best travel sites, so they actually give you information about what they're doing and why it's worth the wait. Now, obviously you couldn't do that for 15 or 20 seconds and just keep saying, yeah, we're still comparing; there's obviously a threshold where it becomes too long.
But again, users appreciate your hard work, and if you communicate to the user what you're doing, they will understand, they will wait, and they won't bounce at the rate they would if you just showed them a spinner and nothing else for ten seconds. So the thing to take home: it's not always bad to let a user wait, users understand this, but it is important to communicate each step you're taking at the moment, just to keep your customers in the loop. Right, yeah, and that rounds up what we wanted to share today, tying these two elements together: on the one hand technical, objective time, and on the other the psychological time that users go through while actually waiting for your site, right? And really making sure, you know, sometimes we make improvements, nothing happens, no metrics improve, and why? Maybe users didn't notice. So we have to play a bit more with all the variables we walked through today. So yeah, thanks a lot for taking your time, we know you guys are all busy. Please always feel free to shoot topics you'd like us to talk about into the chat, so that we can keep the content relevant for you guys. The sessions will sometimes vary: going a bit more high level like today, then in depth again on particular topics, and we'll keep that sort of variation in there. So let us know which topics are key for you, and we'll do our best, also with the other resources we have within Google here, to support you guys. Yeah, definitely thanks for tuning in. And as Lukas said, post in the comments; I think we're both also quite active on Twitter, so you can find us there, just shoot us a tweet and we usually respond straight away. So yeah, don't hesitate, and any questions, just let us know. Cool, all right.
We appreciate it and have a great rest of your day. Thanks guys.