All right. Hello and welcome to this week's JavaScript SEO Office Hours. My name is Martin Splitt. I work for the Search Relations team at Google in Zurich, right now working from home office, as you can probably tell. And it is my pleasure to answer your JavaScript SEO related questions. With me today are a few people from the community, and people have submitted questions on YouTube. So if you want to be part of this, check out the YouTube channel community tab to see when the next Hangout is happening. I'll also post the link to the Hangout there. And then later on, I'll also post the recording. So even if you can't make it live, it's still valuable to submit your questions. Right. Before I go into YouTube, is anyone in the Hangout burning to shoot a question? Yeah, I wanted to know, will Lighthouse eventually, one day, become a ranking factor for website issues and so on? I mean, I know you have to pay attention a lot to it. But I just wanted to ask you a generic question in regards to PageSpeed Insights. I'm not an addict of PageSpeed Insights, but I love the new feel compared to how it changed over time. And I've been watching it from day one, right? So I wanted to know, are there any discussions out there? I know there are a lot of things that you're not allowed to say, but I mean. It's not that I'm not allowed to say. So there are three parts to the answer to this question. First things first, Lighthouse and PageSpeed Insights are developer tools, or actually tools for SEOs as well as developers, to determine how fast the website feels to the user. Now, modeling performance in terms of user experience is very tricky, and you have to find the right proxy metrics. And we're still changing the way that we model. If you look at Lighthouse a year ago or two years ago and now, you'll see an evolution of the metrics and how we mix them to generate the Lighthouse score. It's not something Search specific.
It is for you and for developers to determine how fast the website feels to the user, and where are the low-hanging fruits? Where is the potential to improve this? Now, currently, I think we announced the Web Vitals like yesterday, or this morning or something, in the European time zone. We announced the Web Vitals, which are three already existing metrics that we think are the best way to approximate user experience in terms of loading speed. The Web Vitals are made out of Largest Contentful Paint, Cumulative Layout Shift, and First Input Delay. These metrics, again, model user experience. As you know, we are trying our best to give the best results for our users in our search results. I don't really answer ranking questions because, A, I'm not working in the ranking part of things, and B, I don't really want to talk about ranking, because there are so many other things that you can do to make a website good for your users that will reflect in ranking. Ranking is basically trying to model what is a good answer for this query. So that's the second part. The third part is, page speed is already a ranking factor. So that will not change as far as I can tell. Right, because it's embedded into Search Console, and there's a delay in the updates in Search Console for page speed, like for the moderate and slow categories for every page. Yeah, there's a huge delay in the speed report. So that's why I asked. And of course, user experience is the number one thing that every SEO should worry about. It was just that I wanted to know. So, OK, as Lighthouse evolves, it's going to change even more as time goes by. And then I guess other factors will come in, and then they'll become a ranking factor eventually. Lighthouse itself, again, is a tool for developers and webmasters. It's not in any way related to Search.
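The Web Vitals just mentioned can be measured in the page itself with standard browser APIs (Google also publishes a `web-vitals` JavaScript library that wraps them). As a rough, simplified sketch of how Largest Contentful Paint candidates can be observed, assuming a browser environment; the function name here is my own:

```javascript
// Simplified sketch: watch Largest Contentful Paint candidates via the
// standard PerformanceObserver API. Browser-only; returns null elsewhere.
function observeLCP(report) {
  if (typeof window === 'undefined' || typeof PerformanceObserver === 'undefined') {
    return null; // not in a browser, nothing to observe
  }
  const po = new PerformanceObserver((list) => {
    const entries = list.getEntries();
    // The newest entry is the current LCP candidate; it can keep
    // changing until the page is backgrounded or the user interacts.
    report(entries[entries.length - 1].startTime);
  });
  po.observe({ type: 'largest-contentful-paint', buffered: true });
  return po;
}

// In a real page: observeLCP((ms) => console.log('LCP candidate:', ms));
```

Real user monitoring would send these values to an analytics endpoint rather than logging them.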
What it does, though, is try to emulate a way of determining how fast the website is, under lab conditions. And if you run it side by side with the Chrome UX Report, then you'll also get real user metrics. These things will become more important for us in the future, not as in Search, but basically as developers and the web ecosystem, we want websites to become faster. So you don't have to rely entirely on the Search Console report. Any way of figuring out how fast the website is, especially in terms of real user metrics, not just lab metrics, is important. And I also take into consideration that this is unthrottled all the time. Throttled is much better, checking the throttled connection as opposed to unthrottled. A lot of this kind of shows that the metrics change, I mean, because you've got to keep on refreshing. Lighthouse in general runs on your computer, so it's not a real user metrics tool. Exactly. Right, right. I just wanted to, yeah. Yeah, so the more real user metrics you can get your hands on, the better. The more lab data you can correlate with, that is also helpful. Basically, just build fast websites. That's the goal, really. And Lighthouse is one tool to look into how you're doing. Right, my second favorite. Am I allowed to say it on this Hangout or no? Sure. GTmetrix is my second favorite. And that's how I achieve amazing results for user experience. So I mean, there's Pingdom. Plenty of tools, and you can choose whatever you think works best for you. It doesn't really matter. Because none of these tools shows you the reality. They are all giving you different views in terms of what we think of as performance and how we are all modeling performance. And when I say we, I mean the entire world, the entire web community, is trying to figure out a way to model speed and performance.
And the reports within Search Console, and seeing which pages suck and why people are not coming there, that's what I really watch: the user experience. So yeah. Nice. Yeah. Awesome. Thank you very much for the question. I think I'm going to drag one from YouTube in, and then you'll get the next chance in the Hangout. Hi, Martin. One of the websites I've been tasked with trying to improve search visibility for is serving the canonical tag via JavaScript. The JavaScript is located in the footer. The canonical appends to the head when rendered. My question is, is that acceptable, or should the canonical be hard-coded within the head? That is acceptable, especially if you use the testing tools like the Mobile Friendly Test, the Rich Results Test, or Google Search Console, and see that in the rendered HTML, the canonical tag is, A, the one that you expect it to be, and B, is in the head. Then you will be fine. It doesn't really matter where the JavaScript runs, as long as the canonical gets inserted into the DOM in the right position, and that would be the head. So as long as it shows up in the head and is the canonical that you would expect, that's fine. It isn't that great if there are multiple canonical tags on the page for some reason, or if the canonical doesn't make sense when we are seeing it. Like, if you're canonicalizing to something like, I don't know, a 404 page, then we're like, I don't know. We don't think that's the same thing. But generally speaking, using JavaScript to inject a canonical is perfectly fine, as long as it's injected in the head, which, according to your question, it is. So that's not a problem. OK, that was a quick one, another real quick one. Any advice for best practices when implementing tags using Google Tag Manager? How does it impact speed optimization? Generally speaking, there isn't anything inherently wrong with Google Tag Manager.
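Coming back to the canonical question for a moment: the JavaScript-injected canonical described there can be sketched like this. A minimal illustration only; the function name and URL are placeholders, and `doc` stands in for the browser's `document`:

```javascript
// Minimal sketch of injecting (or updating) a canonical link in the
// <head>, which is where Googlebot expects it in the rendered HTML.
// `doc` is a DOM Document; pass the real `document` in the browser.
function injectCanonical(doc, href) {
  let link = doc.querySelector('link[rel="canonical"]');
  if (!link) {
    // No canonical yet: create one and append it to the <head>.
    link = doc.createElement('link');
    link.setAttribute('rel', 'canonical');
    doc.head.appendChild(link);
  }
  link.setAttribute('href', href);
  return link;
}

// In the browser, e.g.:
// injectCanonical(document, 'https://example.com/products/widget');
```

Reusing an existing canonical tag, as above, also avoids the multiple-canonical situation mentioned in the answer.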
And oftentimes, it is the best bet you have if you don't have that many front-end developer resources, or if you have other needs that can't be met otherwise; then it's a viable solution. It is additional JavaScript, though, and it loads additional JavaScript. So it does have some speed impact, obviously, because it's just another piece of JavaScript that needs to be loaded. So my best practice is, if you can implement it directly in the page or with your own JavaScript that you have to ship anyway, then do that. If you do not have access to developers, or to actually making changes to your site yourself, and Google Tag Manager is the way that you can actually influence things on the site and implement additional tags, then use that. But don't use it if you can avoid it, because it is an additional dependency. It is an external piece of JavaScript that has to be loaded. So reducing that is generally a good idea. That being said, there's nothing inherently wrong with using Google Tag Manager. All right, question from the audience, while I'm scrolling YouTube for the next question? I think I submitted one on YouTube, but I can't find it at the moment. Maybe we can do it like that. Sure. So we are currently building, or we want to build, a React app, and we want to use Prerender to serve dynamically rendered content to Google. And of course, whenever you are entering a URL, it will always return a 200, because it's a single-page application. Yes. And there is a way for Prerender to serve Googlebot a 404 or 301. How does it behave when Google tries to validate? Or is Google validating whether we are serving the same thing to the user as to Googlebot? Because it would result in the user having the 404 page rendered with a 200 as HTTP status code, but for Googlebot, it would be a 404 in this case. How is it? Yeah, so the question aims in the direction of, is it cloaking to do that? Exactly.
Generally not. Unless you do really shady things with that, it's not cloaking. Especially if we are seeing that the page that is served with the 200 OK is also basically just an error page, then we would highlight that as a soft 404. And if, because of dynamic rendering, we are seeing it as a 404 page, then you are in safe waters. So that's not something that you need to worry about, generally speaking. Okay, perfect. Awesome. Martin. Yes. Just a quick question. I've just been watching certain websites out there, and I do send John some stuff for spammy stuff. A lot of this brand is misbehaving with its subdomain. And so it's using its subdomain kind of like for links. And then the webmaster kind of is injecting, you know, like anchor text and so on. I mean, at what point do you guys say, okay, enough is enough? Like if the system cannot detect it. The system is like... So we are pretty much outside of my depth here, because that's definitely for the web quality, so search quality and web spam, teams. But in general, there's nofollow, right? I'm not sure I fully understand the setup, but if it's spammy and if it's doing things that are considered non-organic linking, then that's something that the web spam team is definitely working towards figuring out. And if you're giving us hints towards where we can see samples of that behavior, then we'll look into them. And definitely, if it's something non-organic, then we want to know about these things and we want to catch these things. Right. So in general, if you have subdomains and you're using your clients for links, that's against your guidelines, right? I'm not sure how the subdomain plays into it. Generally speaking, if you are using non-organic links and don't tell us about that being a sponsored link or something like that, then that's a questionable tactic. Okay. I don't understand the subdomain part of this. So you'd probably have to explain that to me.
There's like a trick. They're using it. They're basically taking these subdomains and pasting them on their clients' websites and then using them for links. Aha. Right. Yeah, that sounds like classical link spam to me. Yeah, but the thing is, the exact example, I even shared it with some of your colleagues, and I just, you know, it's been going on for like years and I've been watching it, you know, it just keeps on going on and going on. And I don't know, like every day there's new links coming in. You know, we all have to... As I said, I'm the wrong person to talk to. That's a question for John then. Okay. But in the end... It has nothing to do with JavaScript either. So I don't know, out of my depth here. Well, no, it's just being injected internally. Yeah, but I would have to see the example, and it's not really my topic. So I would ask John that question. Okay. Right. One more from the YouTube list of things. My website runs on a ready-made site builder, something like Wix, but from a local company in Thailand. Since they upgraded their front end from PHP to Angular, where canonical and hreflang meta details are dynamically rendered for Googlebot, I have noticed a huge drop in ranking. Is it because Googlebot can't process all the scripts properly for both meta tags and content? And in the GSC Excluded URLs section, I noticed too many of the URLs being crawled are weird URLs with parameters. Generally speaking, this is unlikely the cause of the rank drop. There are hundreds of factors that go into this. And I don't really discuss ranking to begin with, but it is not impossible that there is some technical reason why we are not seeing the data. It can be that the site builder is somehow preventing us from crawling some JavaScript resources. It is possible that the JavaScript is buggy. You would have to use the testing tools to see if we see the content that you put in your pages. If we do see that content, then at least on the technical side, you're safe.
But then you would have to figure out if any of the other things are off. Like, is the page really slow? Is it doing something weird when Googlebot is indexing it? Is there anything happening that shouldn't be happening? It can also be that the sitemaps are not helpful, because if you are seeing lots of URLs with parameters that you don't expect to be there, then that's probably a problem on their end, where they generate additional URLs. And that is not necessarily helping us. Combine that with a technical issue that there might be, and you would very easily have an interesting situation on your hands that you need to clean up. So I would talk to the platform that you're using to figure out what they do, how they do it, and how they can help you resolve these things. Without looking at the URL, it's impossible for me to judge what's happening. I have a static website built with React Static, OK? Hosted on AWS S3, behind an AWS CloudFront CDN. How can I do a redirect, a 301 redirect, on the client side? There is no such thing as a client-side 301 redirect. 301 is the HTTP status that you're using. The HTTP status comes from the server. I'm not 100% sure, I haven't used S3 and CloudFront in a while, but I'm not sure if you can set that up on CloudFront. I guess you could potentially route some URLs through Cloud Functions or something and then actually generate a 301 response. But once you are in user and client land, basically, once you're in the browser, there's no such thing as a 301 redirect. What you can do, though, is use JavaScript to generate a client-side redirect. That is not a 301 redirect, but Googlebot does follow those and treats them like a normal redirect. So it doesn't really make a difference to us if it's a 301, a 302, or a client-side redirect. So that's a way around this problem. But there is no such thing as a client-side 301 redirect, because 301 is a server-side HTTP status.
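The client-side redirect just described can be sketched as follows. This is an illustration, not the only way to do it; the function name is my own, and in a real page you would pass `window.location`:

```javascript
// Sketch of a JavaScript (client-side) redirect. This is not an HTTP
// 301 -- no server status code is involved -- but Googlebot follows it
// and treats it like a regular redirect once the script runs.
// `loc` is a Location-like object (window.location in the browser).
function clientRedirect(loc, targetUrl) {
  // location.replace() swaps the current history entry instead of
  // pushing a new one, which is the closest client-side analogue to
  // a permanent server-side redirect.
  loc.replace(targetUrl);
}

// In the browser:
// clientRedirect(window.location, 'https://example.com/new-location');
```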
OK, one more, then a question from the audience. We have a website built on Angular with client-side rendering. Well, yeah, which means that when you view the source, you only see the Angular tags and the code and no text. The text is dynamically shown after the page loads. If we switch to server-side rendering, it shows the content wrapped in the tags. What would you suggest, client- or server-side rendering? I guess, according to me, I should go with server-side rendering, as Google will require the text content to be crawled and indexed to understand the context. Yes and no. We don't need the server-side rendering for Googlebot, because we are also running JavaScript. So if you are putting your client-side rendered version into our testing tools, like the Mobile Friendly Test, the Rich Results Test, or Google Search Console, you will see that we are very likely seeing the actual content rendered once JavaScript has executed and your Angular application is ready in the client-side rendering scenario. That being said, if you can enable server-side rendering and there is no good reason not to, in terms of: you know what you're doing and you have that setup tested, or, even better, you're starting a new project and you get to choose which way to go, I would highly recommend looking into server-side rendering as an investment, because server-side rendering usually is faster for the users and also works for bots that don't understand JavaScript. Most of the major search engines do understand JavaScript, as far as I know. I can tell for sure for Google; I know that Bing is also rendering; I don't know about the others. But things like social media bots, when you share something on social media, they oftentimes do not run JavaScript or don't understand JavaScript, and those would be left out.
So if you use, like, Open Graph tags for social networks and inject them with JavaScript, then you wouldn't see them on these social networks, because they don't execute JavaScript. So I recommend looking into server-side rendering, but it's not necessary for Google Search, at least. OK, time for a question from the Hangout. Can I ask a question? Sure. Cool. We're doing server-side rendering, like the common practice, and as we render the page, we have state built on the server side, and then we serialize it and push that down, along with the already built HTML, to the client. So my question is, when the bot sees that payload, it's like duplicate content. Everything appears twice, right? Once in the state, like this JSON string, and once within the actual DOM or the actual HTML. Is that a problem, this duplication? No. If it's JSON stuff, especially if it's just application state in the memory, we don't care. We look at the DOM, and we only take what's in the DOM. Thank you. You're welcome. Here's an interesting question. Is it OK for SEO to use services such as Cloudflare's Rocket Loader? It helps us reduce paint times and speed things up, but I'm not sure how Google would read it. I don't know either. I haven't tried Cloudflare's Rocket Loader. I haven't used Cloudflare much. It's pretty good. I'm not doubting that. The thing is, the easiest way for you to test this is to point our testing tools to one of your URLs. If your content shows up, you should be fine. That's all I can tell. I know what Rocket Loader is and what it does, but I just don't see why it would inherently be a problem. But I'm never saying something like, oh, yeah, this is fine, when I haven't tested it. So as I haven't tested it, I highly recommend you all test this, try this out for yourselves. And if it renders the page fine in Googlebot, then you should be fine. In Search Console, when live testing a JavaScript rendered page, we sometimes see static resources like JavaScript or CSS files.
The requests have failed, but there are no details about why they failed. Retesting would then, again, sometimes work. What is the recommended way to debug such issues? This is unfortunate, but it is a limitation of the testing tools. So generally speaking, the way to debug these things is to look at the reason in the Mobile Friendly Test, for instance. There's a reason field or something like that. And let me actually see if I can bring this up on the screen real quick. I should be able to. Maybe I'm actually getting an error now. That would be convenient, but maybe I'm not. And then I have to deal with the fact that I'm not getting the error. I want to share a Chrome tab. Oh, my screen has gone again. This becomes really annoying that it does that now. OK, here we go. I think this has something to do with the screen getting too warm, because the sun is shining onto my screen right now. OK, if I go to some URL, experiments, I can see the features. And if I'm lucky that it's actually... I think features is a bad example, because I don't think it loads anything interesting. Interesting. Oh, I know why: my SSL certificate for this one has expired. And honestly, I don't really care too much for this page. That's new. Right. Fantastic. I can't show this right now. But basically, is this because my website is somehow broken or something like that? Let's see. Code. Or is there a fundamental bug in...? No. OK, so at least the testing tools don't seem to be broken broken. They just didn't like that URL. And I'm not sure why. Maybe I made something weird happen on that URL. No, it's also just, oops, something went wrong. Is this because I'm logged in with that specific account? Maybe I need to use a different account. Can I actually share? That's a good one. I think this is the right window. I think you should be seeing a fresh Chrome window now. Let's try this one more time. This might work. I don't know. Let's try this.
So there is a column that says why the request has failed, and oftentimes... interesting. OK. Is it because of the HTTPS? I don't think that should be the problem, because it's paired with a different URL as well. I'm not sure. Let's see. What happens when I try this again with a completely unrelated URL? How about, I don't know, this one that's from the previous office hours? Maybe I'm just unlucky right now. Maybe we're over capacity or something. Who knows? Anyway, so you see the reason why it failed, and there is the infamous other error. If you're seeing the other error column, or status code, or reason, I'm not sure what we call it in the tool, and actually, that's why I'm trying to run the tool, the other error basically just means: welcome to the world of limitations. Googlebot, when we are indexing pages, is very patient with things and can retry stuff over and over again. So hypothetically, it can take anything between seconds and hours, I don't know how much, because we are retrying if something goes wrong. The test does not. The test does not have the affordance to... oh, status. Thank you very much, Dave. So the status in the Resources tab tells you... ah, OK, fantastic. So I have opened a random website that was in the previous office hours, I believe. And I can now share this with you as well, because it actually, finally, loaded. So, and I think we are lucky because we actually... yes. So this happened multiple times in this case. Am I sharing the right window? I'm not sure. Yes, I think I'm sharing the right window. So if I zoom in a little bit, we see here in the status, thank you again, Dave, in the status field, we have other error. If you see this, that usually means that we just didn't have the resources or the time to wait for the resources to load. We have a limited quota of how much we can fetch in one test. So sometimes it just takes you a few iterations. Also, normally, we are caching quite heavily. The test tools are not caching.
And that's why sometimes things time out. That isn't a problem. That is not a problem. As I explained, Googlebot is actually very patient and usually does not have these limitations. So what you can do to figure out if things look fine, as far as you can tell, is use Search Console and see if the page is indexed as you would expect. And you can use the URL Inspection tool to see the crawled page, which shows you the HTML that we saw when we rendered it the last time. If you see that the crawled page has issues, then that's something you want to investigate and make sure that it's not something like robots.txt or something else blocking us. But also, you would see the error in the testing tools if that is the error case. If you see other error, that is just the fact that the testing tool, A, does not use caching, and B, is a lot less patient, because we don't want you to sit in front of your computer for an hour until the tool actually shows you any results. That would be unfortunate. So that, unfortunately, sometimes happens. All right. Before I scroll further on this list, any further questions from the audience? What's your favorite framework? I don't have a favorite. For me, they are tools. It's like asking, what's your favorite tool? And that depends on the job that I'm trying to do. I think all the frameworks today are very, very similar. For me, a framework is basically a bunch of decisions made for me, and a bunch of tooling around it, and the community around it. So for instance, if I say, OK, I work in a large company environment where we have lots of developers working on the same application together, and the teams are distributed, and the application is really, really big, and there's a lot of interaction with the backend and everything, then I might want to choose a tool that facilitates successful deployment and development in such an environment.
And one of the things that makes these things a lot easier is TypeScript. TypeScript adds type system information to JavaScript, more or less. So it gives you a type-safe variation of JavaScript, or a superset of JavaScript if you wish to say that. And one framework has that built in, so I don't have to configure it, I don't have to work out how it works with this framework. It's just built in, and that's Angular. So maybe use that then. If TypeScript is a big thing for you, then I think Angular is an easy choice to make, because it comes as a TypeScript version. Sure, you can use TypeScript with React, with Vue, with Ember, with everything. It's not that you can't do it. It's just more effort. Also, if you are working with a team where Java developers from the backend now have to work on the front end as well, then Angular feels a little more familiar, I think, or TypeScript feels a little more familiar. So I think that's a good choice. If you are working in an environment where you don't really need a framework framework, but you need a bunch of components that are mostly, like, vanilla in a way, then maybe Vue or React are the better choices. If you really like the Vue ecosystem, then go with that. If you prefer the React ecosystem, go with that. I don't think there is a best framework, because they just made different decisions in different spots. For instance, React's biggest thing is that they have a DOM abstraction layer. So React has the ability to say, oh, I just want to compile everything that I wrote into a mobile application, or into something that runs on the Oculus Rift VR display, for instance. Whereas the others don't really have that. I mean, there were and are options for this, but what's the point? Yeah, so it's... So you can interact more with React. You can do a lot of integration with React, where Angular, I mean, Angular is getting there, right?
And also, the other thing is, I don't know how easy these interactions or integrations with React really are, because I heard from a bunch of people who tried React Native and weren't as happy, but then again, other people who were super successful with React Native. So I guess it doesn't really depend on the tool. It depends on you, your resources, your project. What are you trying to accomplish, and what decisions would you make in the process of designing the system? And then you find the framework that has made more or less the same decisions. That's how I would do it. So I know in my, you're welcome, in my previous job, I was tasked to figure out which framework we should be going with as we were rewriting our application. And I just sat down and talked to the developers on the team and figured out: what are their preferences? How do they think about systems? How do they approach building web applications? And then I made a decision based on that. And I figured out, okay, so pretty much all the frameworks are an option for this thing; which other decisions would we like to make, and then which framework mirrors that? Cool, back to SEO. Are there quotas that Googlebot uses when crawling JavaScript-rendered pages or sites? So crawling has not much to do with rendering. Those are two different phases. Crawling is just: we give a URL to the bot, and then the bot makes an HTTP request to it. That's where crawl budget comes in. And that is kind of like how many requests we can do in parallel. And these requests also include things like API requests and JavaScript file requests and all that kind of stuff. So it does have a little bit of an impact, but not as much as people might think, especially because caching is involved. I wouldn't worry about it too much unless you have a really, really, really big site with lots and lots of JavaScript files.
You might want to consider bundling, and also tree-shaking a little bit, and maybe also looking at code splitting at the right spots. But there's no such thing as a render budget or a JavaScript budget or anything like that. There is crawl budget for crawling specifically, though. But that's also true for non-JavaScript websites. Does Google still recommend using snapshot tools like Rendertron for serving websites built with modern JavaScript frameworks? No, it's a workaround. You can use these tools, and Rendertron is still actively being maintained, as you can see Tuesday and Thursday when I do the Twitch streams where I actually do maintenance on Rendertron. But it is not necessary for Googlebot, and we consider it a workaround, because if you have JavaScript that causes interesting situations in Googlebot, it probably also causes interesting situations for users. So you either have to fix that, or possibly consider server-side rendering as an alternative, too. Dynamic rendering was and is a workaround rather than a long-term solution. And I think that's it for the YouTube questions. So, more questions from the audience here. Hey, Martin. Hi, Mihai. Just actually a quick follow-up to what you just mentioned. So just curious if you have... I mean, we've worked with a website where only part of the content is being generated asynchronously through JavaScript. So it's an e-commerce website, and the product listings on category pages are loaded after the initial page load. It works very fast. It shows up OK in the URL Inspection tool. Should they do anything special if everything seems to work fine as it is? No, I wouldn't fix what isn't broken. Okay, so no server-side rendering or anything special then. Unless you have a good reason to. I mean, as I said, server-side rendering, if done right, is fantastic, but it's also an investment.
If you don't want to have that investment made, or if it's not a priority and your website is fast already, then you're probably not winning as much. So I would not change something that works. But these being category pages, is there a risk that Google might take a bit longer until it finds the links to the products, since the products are loaded after the initial page load? So you have to then discover the links. So we can only do the link discovery after rendering then, but normally that isn't a problem. For 90% of the pages, or URLs, we're actually seeing a render queue time within minutes. So you would probably only shave off a few minutes, and usually the investment into server-side rendering for an existing project is quite substantial. So I don't think you're going to win much, really. Right, right. Okay, cool. Good to know. You're welcome. All right. Oh, we have a bit of time left. That's nice. So, more questions? Would you recommend a shared server or having a dedicated server? That depends on many things. It doesn't make a difference for us. It can make a difference for you if you notice that your performance isn't great. As in, server-side performance isn't great: if things take longer to save into a database, for instance, that will mean that users ordering something from you may take longer to get their order confirmation. If the hosting provider doesn't balance things properly, then the server might go down because someone else is having a lot of traffic. So there isn't fundamentally anything wrong with a shared server, especially from our perspective. We're just going to do as many requests as we can reasonably do and want to do. And there isn't an inherent upside or downside. They don't want to rack up more expenses. It doesn't make sense, you know, to go from a shared to a dedicated server. But then you show them the slow speeds, and then you explain to them that, you know, it can be a ranking issue.
With slow speeds, you have to be careful, though. What is slow? Is the time to first byte slow, is it that their server process takes too long, or is the network slow? The website and the server. Right. The website and the server. And then there are a lot of people on that shared server, so. Right, so the server responds very slowly. Extremely, yeah. Right, because that's the thing: you want to be very careful to make sure that it's actually this, like if you ping the server, it's really slow, and if you make network connections to the server, that's where the time is spent. You can see that in the network timeline, where time is spent. If their server is really slow, it might be fine for them to not care about it. Generally speaking, it is one ranking factor, but it's not the most important one. And it's also not the only one. So I wouldn't argue that that's necessarily time best spent. Making the website faster, as in the client-side performance of things, that's separate from the server question. If the client-side performance of the website is such that, once things have arrived over the network, it takes a long time to actually show things to the user, that's something they probably want to fix, because that's something that drives people extremely mad. A slow server can oftentimes be compensated with good caching, so that's something they probably want to look into. And only if the server really gets very, very slow and also unreliable, then that's something that I would bring up with my hosting provider. And maybe they can move you to a different part on their side, so that you're still on the plan of a shared server, but you're on a less busy server, on a less busy physical instance, if they don't want to rack up more expenses. And then they have to figure out, where do they want to spend the money?
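As a hypothetical illustration of that breakdown, one could classify a page's Navigation Timing numbers into server think time, network transfer, and client-side work. The helper name, thresholds, and logic here are mine, not a Google tool:

```javascript
// Hypothetical helper: given a few Navigation Timing values in
// milliseconds, guess whether "slow" means the server thinking,
// the network transferring, or the client rendering.
function diagnoseSlowness({ requestStart, responseStart, responseEnd, domInteractive }) {
  const ttfb = responseStart - requestStart;        // server think time (time to first byte)
  const download = responseEnd - responseStart;     // network transfer
  const clientWork = domInteractive - responseEnd;  // parse and execute on the client
  const worst = Math.max(ttfb, download, clientWork);
  if (worst === ttfb) return 'server';
  if (worst === download) return 'network';
  return 'client';
}
```

In a browser you would feed this from `performance.getEntriesByType('navigation')[0]`; the point is that "slow" has different fixes depending on which bucket dominates.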
Yeah, it's also a possibility to use a CDN in front of their server, so that more things get handled by the CDN instead of the actual backend. There are ways of working around slow servers without having to get a separate dedicated server for more money. Okay. Yeah, no, of course. You've got many options for CDNs out there. Okay, thank you. So that's something that I would look into before I'd strongly recommend switching servers or providers. Awesome. Any other questions? I think there was someone else trying to get a question in. Yeah, it's just a quick one. In some ways, I suppose it follows on from Mihai's stuff. With things like Nuxt and Next, you've got this nice hydrating server-side stuff. Occasionally there'll be components people use that are only client-side or only work client-side, because they don't want to fix things, things like that. Is that really a problem, as long as you use the testing tools and they show the content? Is there any kind of panic where you're sending out something, then hydrating parts of it? No, I think that's fine. Yeah. That's fine. Again, especially hydration is a really nice thing, because you get the best of both worlds normally, and if it takes longer to hydrate, that's okay, I guess. There's no inherent issue. It might, it depends. And I would very carefully use the test tools to see if we are actually seeing what we need to see. And also, things like Lighthouse will show you if there are things like layout shifting and all that kind of stuff that is also not great for user experience. But unless you see something wrong there, stick with it, that's fine. Just to follow up on that as well, do you know much about the CLS score? I haven't really looked into it yet, I have to admit, but I think it makes perfect sense, as that is a factor that is super annoying if you experience it as a user.
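The caching and CDN advice above mostly comes down to sending cache headers the CDN can act on. A sketch, with an illustrative helper name and illustrative max-age values rather than recommended ones:

```javascript
// Sketch: pick Cache-Control headers so a CDN can absorb most
// traffic for static assets while HTML stays reasonably fresh.
function cacheControlFor(path) {
  if (/\.(js|css|png|jpe?g|webp|woff2?)$/i.test(path)) {
    // Fingerprinted static assets can be cached for a long time.
    return 'public, max-age=31536000, immutable';
  }
  // HTML: let the CDN cache briefly (s-maxage) and revalidate,
  // so the slow origin server only sees a fraction of requests.
  return 'public, max-age=0, s-maxage=60, stale-while-revalidate=300';
}
```

With headers like these, a slow shared server is hit far less often, which is the "good caching can compensate" point from the answer.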
The amount of things that popped up on my phone just because I clicked or tapped on something and it actually moved while I was tapping, frustrating, I tell you. So I'm really excited to see more work around these things. And it's also very impressive to look at how the metrics have evolved from years ago. We looked at page load time. It's like, what is a page load? Is it the time when the HTML is completed? Is it the time when JavaScript has completed? And yeah, it's quite interesting. And we'll see more in the future, I'm sure, because that's an ongoing challenge to better understand and model what performance means to you on the web. Awesome. Do you think it's important as well that when people are looking at things like performance, they take into consideration what their site is? Because for something that's very interactive, FID might be more important; for something that's just an article, something like LCP might be more important. And that's a huge challenge. That's true, that's the huge challenge. On one hand, you want an easy-to-grasp metric that makes sense if you just look at it at a glimpse, like the Lighthouse score. But at the same time, you can't break it down into a single score, because you might think, actually, fun fact, this is a thing where users only come in once and do the thing once, so I don't really care if we have a service worker, because they are not normally coming back to this website. Or, well, actually, we kind of don't care about our first-time visitor journey, because the first-time visitors come in through these other landing pages, which are really fast, and then they log in, and then they get the service worker. So it is really, really hard to model this and really, really hard to balance both needs, which is why, when people ask me about the Lighthouse score, I'm always like, see it as a smoke test. If your Lighthouse score is five,
you definitely have work to do, and you should get to it as quickly as possible. If your Lighthouse score is 95, sure, there is a bit of shaving the yak, as we like to say. There's a little bit of turning knobs and fine-tuning dials there, but I'm not sure if it's gonna buy you as much as you think it does. And if you're somewhere in the middle, then it depends on where the problem is. As you say, if you have a highly interactive chat app and every value is fantastic except time to interactive, or sorry, first input delay: if the FID, the first input delay, is really, really high, then people will get frustrated trying to actually interact with the app, and they can't, because the website's still loading. It doesn't matter how great the other values are. If that's a blocker in your application, then you have to address that. Whereas a website like Wikipedia might not care as much about the interaction delays, because they're like, well, we just have to get content in front of the reader to actually observe and read as quickly as possible, so we care more about the largest contentful paint. So, yeah, it's not easy to express all of these differences in a single score. So take your Lighthouse scores with a grain of salt, which doesn't mean that you should rationalize having a Lighthouse score of five. That's very likely a real problem. It might also be that Lighthouse had an issue understanding your page, at which point you can figure out whether it is a performance problem or not. Mm-hmm. Not easy to break this down. Now, is 5G gonna make things easy? Sorry? Is 5G gonna make things easier for JavaScript, or worse? Looking at how it has been evolving so far, no, I don't think so, because the tendency is, if you look at things like, we have so much faster machines these days, yet software hasn't gotten faster. It still takes ages for me to open some websites.
It still takes ages for me to open certain applications, even native applications, where I'm like, how does this happen? We used to have this fast and relatively well-working, even if the UI didn't look as fancy. This was fine 10 years ago. This is still fine, but not great. So I think a faster network speed will probably only invite different use cases, use cases that we probably haven't seen today, maybe more WebVR, so that we're all doing our Zoom meetings, instead of in Zoom, in VR in the browser, which is fantastic, but also means that we have to move a lot more data. So then that eats up the extra bandwidth and the extra speed that we have. And, sorry for that. So I don't think that more bandwidth and faster mobile speeds will fix these issues inherently, because we tend to just use up whatever bandwidth we get. Right. Hopefully I'm wrong. That would be fantastic, but I have a feeling that it won't be so easy. All right. Anyone else having a question? I think it's a good time for the last question. There's one more in the YouTube-submitted questions, but I'm not sure how to answer this one, because it's a very vague question. What aspects should I be aware of if I want to change my website to a single-page application? I'm assuming this question is about building a client-side rendered single-page application, at which point my suggestion would be: consider a higher-level framework, like Nuxt or Next or Angular Universal or any of the other frameworks available that have server-side rendering built in, because server-side rendering usually means a better user experience, and it's usually a lot faster. That being said, there are a few things that you want to watch out for. Definitely test your implementation with our testing tools, making sure that we are seeing the content in the rendered HTML. And yeah, there are a few things.
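The server-side rendering recommendation above can be sketched in miniature: render the content, including any links crawlers need to discover, into the initial HTML response, and let a client bundle hydrate the same markup later. The function and field names here are illustrative, not framework code:

```javascript
// Minimal sketch of the server half of server-side rendering:
// product links end up in the initial HTML, so they can be
// discovered without waiting for client-side JavaScript.
function renderProductList(products) {
  const items = products
    .map((p) => `<li><a href="${p.url}">${p.name}</a></li>`)
    .join('');
  return `<ul id="products">${items}</ul>`;
}
```

Frameworks like Nuxt, Next, or Angular Universal do essentially this for whole component trees, then attach event handlers on the client.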
Check out our documentation on developers.google.com slash search, and in the guides, we have a bunch of guides that give you hints and ideas. Or check out the JavaScript SEO series on our YouTube channel. Good. Thank you very much, Martin. You're very much welcome. Very helpful. Awesome. I think we have time for one last question for real this time, if anyone in the audience has a question. Hi, Martin. Hi. This is Rohan from Search Engine, Germany. I would like to ask one question about service workers. We had implemented a service worker to serve WebP images instead of using the picture tag with the source tags in it in the HTML. So here's what we actually do: we check through the service worker if the browser supports WebP. If it does, we change the image format to WebP. We use the fetch API and return a WebP image in response to a JPEG or PNG request. But when the images are opened directly in the browser, without the HTML webpage, we serve the original format, so PNG stays PNG and JPEG stays JPEG. So actually, WebP is served only when the images are fetched from HTML webpages. Could this cause any image ranking issues or other problems? I don't think this is going to cause any specific issues with images, but it also means that we are not seeing the WebP format, because we are not running the service worker. So we won't see the WebP file format. But when I check via web.dev or the PageSpeed tool, it does see WebP. Because that has nothing to do with Googlebot. OK, Googlebot doesn't see it. Googlebot doesn't see it, yeah. OK, gotcha. Thank you very much, Martin. You're welcome. All right, ladies and gentlemen, thank you so much for joining this week's JavaScript SEO office hours. I hope that this was helpful and a little bit entertaining as well, especially with the discussions that creep in every now and then. We will be back next week. I'll post in the community tab on our YouTube channel. This recording will be up soon-ish. And thanks so much for joining.
Have a fantastic time and stay safe and stay well and healthy. Bye-bye. Thank you. Bye-bye. Cheers.
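A footnote on the service-worker question above: the core decision that setup makes can be sketched as a pure function. In a real service worker this logic sits inside a `fetch` event handler; the function name is illustrative:

```javascript
// Sketch of the WebP-switching decision: if the browser advertises
// WebP support in its Accept header, rewrite an image request to a
// .webp variant; otherwise (including direct image loads that go
// through no HTML page) keep the original format.
function pickImageUrl(url, acceptHeader) {
  const isImage = /\.(png|jpe?g)$/i.test(url);
  const supportsWebP = /image\/webp/.test(acceptHeader || '');
  if (isImage && supportsWebP) {
    return url.replace(/\.(png|jpe?g)$/i, '.webp');
  }
  return url; // serve the original PNG or JPEG unchanged
}
```

As noted in the answer, Googlebot does not run the service worker, so it would fetch the original formats regardless of what this function returns for browsers.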