It's great to have you all here, gathered today to learn more about JavaScript and search. And the coolest thing is, today I have the pleasure of not being the only one presenting: I have Zoe with me, who is working on the rendering team. So she's the one from the team that made the new Googlebot happen. So please cheer and give another round of applause. Oh, well, it was a team effort, Martin. Thank you. And it's great to be here. Martin, I know you've done some work around JavaScript and search. I'd love to discuss it with you if you have some time. Actually, I have about 40 minutes until I have to do the next thing, so let's talk a bit about it. But before that, I would like to go into something else. You know, I talk to SEOs and developers all around the world, and it's great to do that. But I keep getting this sentiment, especially from SEOs, that JavaScript is the worst thing ever, it's the new Flash, it's evil, and you can't have a successful website if you're using JavaScript. Oh, oh my gosh, Martin, that sounds a little bit extreme. I think developers and SEOs should work together to make sure their web content works well. JavaScript is a tool, and it can be used to make great websites, but every tool can be used incorrectly and do harm. That is absolutely true, and I think that's such an important point to drive home. I mean, JavaScript is an essential part of the web; it is what enables so many new capabilities on the web. At the same time, if you use it incorrectly or overuse it, that's a very, very bad thing. And we see that with things like this graph from HTTP Archive: the amount of JavaScript that we ship to users loading our pages has been growing and growing over the years. 
So what I'm wondering is, with JavaScript being an essential part of the web and a good tool if used right, but also being so heavily used, how does Google Search deal with that? Well, we keep improving Google Search for JavaScript sites. At the same time, we make sure we keep crawling and indexing efficiently. That sounds very nice and airy, but do you have any examples of that? Oh, yes. OK. So let's start with the big news. You know Googlebot used to render pages with the engine from Chrome 41, right? Yes, that was the number one question I got, and the number one question I asked you over and over again: "Hi, Zoe, long time no see. So, when is Googlebot updating to the new Chrome?" So we're getting this update? Yes, actually, we already got the update a couple of days ago. So this means it's not just that you've updated it and will start crawling our sites with it at some point; you're already doing that. Yes, that's correct. I somehow nearly missed the announcement on Tuesday. That was a little weird, but I think this is fantastic and this is amazing. And just to make sure I get that absolutely right: this is the latest stable Chrome rendering engine. Absolutely. Cool. So that is good news, but what I'm wondering is: what does that mean for me as a web developer? Well, it gives you access to over 1,000 new features that are now available to Googlebot when rendering pages. That's really, really cool. I mean, I haven't counted all 1,000, but if you look around, you'll see that there's tons of new stuff. So that's great. So we have a modern web rendering service in Googlebot. That's lovely. But I have another question. Do you know what my next question is going to be? Oh, your next question? Are you wondering if we'll get four years out of date again? Yeah. So when do we get the next update? Actually, we'll update the rendering engine regularly. So we'll be more or less in line with desktop Chrome, give or take a few weeks. A few weeks. Something like that. 
We're not making any promises, but that's the idea. A few weeks. That's amazing. I love that. That's really big news. All right. OK. Let me get that straight. So modern JavaScript is no longer a blocker for SEO. Absolutely, yes. And what kind of new features are these? Well, you can use pretty much any modern JavaScript feature, including let, const, a whole bunch of new array methods, and so on and so forth. Right. But those are JavaScript language features. What about new APIs? Yes. Many of those are available, too, like Intersection Observer. I had hoped for that one. So that's fantastic news. And that probably also means that if more of the APIs are available in the browser, I don't need as many polyfills as I used to. Right, right. You should still consider providing polyfills for older browsers. But if you have a polyfill that exists purely for Googlebot compatibility, you can remove it. Yes. So one example: I know that one of the JavaScript frameworks shipped a polyfill for Array.prototype.fill as an optional thing just for Googlebot. We can kill that now. That is amazing. You know, that's great. And these ongoing updates will probably also mean that you update the user agent eventually, right? Yes, eventually we'll update the user agent string to reflect the version of Chrome that we're running. That's cool. So right now, we are still reporting Chrome 41 because we don't want to break sites that have it hard-coded. But we'll do that update eventually. That's great. That's amazing. So what else will the future hold? Speaking of the future. What else will the future hold? We're going to update the testing tools, too. We haven't gotten around to updating many of them yet, but look forward to them running the new Chrome in the coming months as well. Right. We wanted to make sure that we get you this upgrade as quickly as possible, but unfortunately that means we might have some rockiness on the way. 
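To make the upgrade concrete, here is a small sketch of the kind of ES2015+ code that previously had to be transpiled or polyfilled just for the Chrome 41 renderer and now runs in Googlebot as-is (the values here are only illustrative):

```javascript
// A few of the modern features mentioned above, now safe to ship to Googlebot
// without transpiling down or polyfilling just for the crawler:
const sizes = [1, 2, 3];                    // const/let block scoping
let doubled = sizes.map(n => n * 2);        // arrow functions
const padded = new Array(3).fill(0);        // Array.prototype.fill
const firstBig = doubled.find(n => n > 2);  // Array.prototype.find

console.log(doubled.join(','));  // 2,4,6
console.log(padded.join(','));   // 0,0,0
console.log(firstBig);           // 4
```

You should still transpile or polyfill for older browsers your users are on; the point is only that Googlebot-specific workarounds can go.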
But look at it from a different perspective. This means that if the testing tools we provide, like the mobile-friendly test and the URL inspection tool, say it's fine, it will definitely be fine. And if you get JavaScript errors there, you still have a chance to actually make it into indexing, but you have to double-check things. And I think that's OK. You're working on that? Yeah, we're working on it. Sorry for the glitches in the meanwhile. Things like that happen. I mean, this was a huge update. You worked on this for years, and now we finally have it. And I think it's great that you're giving this to developers and SEOs out there. You can now run modern JavaScript in Googlebot, and I think that's the real update to focus on here. That said, I don't think this is a silver bullet. What I'm wondering is, as someone who's building websites, should I look at something else? Like, can I do something to help everyone on the web? Why, yes, Martin. You should keep your users in mind. Fair point. That's a good point. And look, this example page, it has quite a bit of loading time. It takes a while for the cat pictures to load. Yeah, I mean, come on. It's lots of cat pictures. And I guess, for cats, you can wait a little bit? Yeah, but maybe we can do a little better. The browser streams the HTML as it arrives, but there isn't much that can be displayed right away. We've got some script tags, but the body is empty. And the browser won't discover and display the cat images until the JavaScript has been downloaded, parsed, and executed. Right, right. So what you're saying is: sure, I load a lot of images, but it could be so much faster. The browser gives me the capability to do that faster for free, and I'm breaking it. Yeah, and it may hurt your users. Isn't there some way to use your JavaScript framework to produce HTML and ship that to the browser up front? Good question. And the answer is yes. 
But that brings us into this whole new world of web app architecture, so I think we need to talk a little bit about terminology. The first thing that you probably want to look at is universal JavaScript. Universal JavaScript, that's the same thing as isomorphic JavaScript. Yes, because we don't have enough terms, we have to invent new ones. So it's universal JavaScript, server-side renderable JavaScript, and isomorphic JavaScript, but it's pretty much the same thing. Cool. And that's just JavaScript code that can run either on the server or in the browser, right? Yes, absolutely. It's kind of nice. So JavaScript itself is a pretty agnostic language, but there are features that only work in the browser, and there are features that only work on the server, and you want to make sure that you use the subset that works on both sides. And that enables us to do server-side rendering. Ah, yes, server-side rendering. Like in the old PHP days: when the server gets a request, it runs some code and generates the HTML that is sent to the browser. Exactly, except that we are not running PHP. I mean, you can, but we are talking about running the JavaScript from your framework code on the server side and generating the HTML there. And what else is there? So server-side rendering has a few, well, not necessarily drawbacks, but a few characteristics that you don't want to forget about. One thing is that it doesn't really give you the quickest response, because you have to run it on every request. So you can also do pre-rendering instead. Pre-rendering. That happens when you make changes, like at build time or on deployment, right? Absolutely. So basically, instead of doing it on every request, you're only running the JavaScript when you know that the content has actually changed. And that's useful if your content doesn't change that often. So it could be useful for things like landing pages, blogs, and so forth. Oh, absolutely. 
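The distinction just described, running the same render code at different times, can be sketched in a few lines of Node.js. `renderPage` here is a hypothetical stand-in for "run your framework's code to produce HTML":

```javascript
// renderPage stands in for running your framework code to produce HTML.
function renderPage(title, body) {
  return `<html><head><title>${title}</title></head><body>${body}</body></html>`;
}

// Server-side rendering: run the function on EVERY request.
// (Sketch with Node's http module; commented out so it doesn't start a server.)
// const { createServer } = require('http');
// createServer((req, res) => res.end(renderPage('Blog', 'Hello'))).listen(3000);

// Pre-rendering: run the function ONCE at build or deploy time, write the
// result to a static file, and serve that file without running any
// JavaScript per request.
const prerendered = renderPage('Blog', 'Hello');
console.log(prerendered.includes('<title>Blog</title>')); // true
```

Same output either way; the trade-off is per-request CPU cost versus how often the content changes.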
I mean, those are fantastic candidates for pre-rendering, because when you have a blog, you know this content isn't going to change until you make an update, until you publish a new post or edit an existing one. So you can run the pre-rendering at that point and just be a static website the rest of the time. Cool. And by the way, I think I heard another term at some point. I think it was hydration. What's that? So hydration is the thing that every human should do properly: you should drink enough and keep yourself hydrated. But in this case, it actually means something else. It's a variation of server-side rendering, because what we discussed so far, pre-rendering or server-side rendering, the pure version just generates static HTML pages. Yeah. And just generating static HTML wouldn't be quite what I want if I'm building an awesome single-page web app. That's the thing. If you build a highly interactive app, you want this interactivity to still be there, while the initial content should also already be rendered. So basically, yeah, that is what hydration is about. I see. So it sends the HTML to the browser and then upgrades it in the browser once the JavaScript is downloaded, parsed, and executed. Exactly. So that gives you the nice dynamic nature of the website even though you have the initial content in the HTML already. And let's look at how that would look in a project. So for instance, if I'm using React, I can use something like React Snap. It starts with my home page, crawls all the links it finds, and generates static HTML for each of them. But that's pre-rendering. And if you have a regular React application and want to hydrate on top of that, it means you also need to make some changes to your code. So again, this is the main application code. 
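The change being described is roughly the following. The in-browser version (in comments) assumes ReactDOM and an `App` component; the decision itself is isolated as a plain function so it can run anywhere:

```javascript
// In the real React app, the change would look like:
//
//   const root = document.getElementById('root');
//   if (root.hasChildNodes()) {
//     ReactDOM.hydrate(<App />, root);   // pre-rendered HTML is present
//   } else {
//     ReactDOM.render(<App />, root);    // plain client-side render
//   }
//
// The decision logic, separated out so it can be shown outside a browser:
function pickRenderMode(rootElement) {
  return rootElement.hasChildNodes() ? 'hydrate' : 'render';
}

// Minimal stand-ins for the DOM node:
const prerenderedRoot = { hasChildNodes: () => true };   // e.g. HTML from React Snap
const emptyRoot = { hasChildNodes: () => false };        // plain client-side app
console.log(pickRenderMode(prerenderedRoot)); // hydrate
console.log(pickRenderMode(emptyRoot));       // render
```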
You have to make this change: if you see that the root element already has some child nodes, that means we have the pre-rendered HTML in the browser, and the app hydrates it. But if it doesn't have anything in there yet, there is no pre-rendered HTML, and we need to generate the initial content from scratch. And is there some other way to do this with React? Yes. Especially if you start a project from scratch and have no old legacy code base, I highly recommend taking a look at Next.js, for instance. I mean, it does require some changes, but it is definitely worth it. But I don't want to rewrite everything. OK, that's a fair point. So when I said Next.js: Next.js is a higher-level framework in the sense that it uses your React components but gives you helpers for server-side rendering and hydration as well. I see, I see. So that's React, but you might use other frameworks, right, Zoe? What is your favorite framework? Oh, my favorite? I would say my favorite framework is JavaScript, Martin. OK, well, yeah, vanilla JavaScript, that's a fair point, I think. But if you were using other frameworks like Vue.js, for instance, you actually have options as well. You have three options for Vue.js. The first one is the Vue server renderer, and that gives you SSR plus hydration. That's kind of cool, you know. But you still have to rewrite a lot of your code, right? Yeah, well, you still have to have universal code in your Vue application, which is unfortunate. But if you don't want to do that, there is another option. The second option is called the Prerender SPA Plugin. I like that, because I want to be in a spa sometimes. No, it's SPA as in single-page app. It basically pre-renders your page like we saw with React Snap, but it doesn't give you hydration. So this only gives you pre-rendering and not the benefits of hydration. So what if I start a new Vue project? 
So similar to what we said about React, there's a thing you should look into if you're starting a new Vue project, and that is Nuxt.js. Nuxt.js is inspired by Next.js. Surprise! The name is a one-letter difference. It does pretty much the same thing that Next.js does for React, but specifically for Vue. But it says pre-rendering right there as well. OK, yeah, you caught me. So it does SSR and hydration, but it also comes with a command-line interface, and with that you can say: this page, and this page, and this page, I actually want to pre-render as well. So you get to mix and match a little bit. I see. So you can choose one or the other. And what does the server-side code look like? Well, the server-side code depends a little bit, but let's say we use Express.js, which is a pretty popular Node.js server-side framework. The first step, if you were using the Vue server renderer option, is to create the renderer. I see that it creates a renderer with an HTML template. What's all that for? So the HTML template allows you to have the site-wide data, like the general HTML structure, and the Vue application that runs just fills in the content inside that template. I see. And does this template allow interpolation of page-specific data, like the title or meta description? Title and meta description, yes. So in the next step, whenever a request comes in, because we are now not pre-rendering but doing SSR plus hydration, we can specify the page-specific data, and that gets replaced in the template as well. I see. So that will produce a string with the template HTML together with the HTML generated from Vue.js, I guess. Correct, exactly. And once we have this HTML string, we can give it to the browser, and that's what it's all about, really. Nice. And that works with your Vue.js components? 
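As a rough illustration of that template step, here is a toy model, not the actual vue-server-renderer API: the renderer drops the app's HTML into an outlet placeholder and interpolates page-specific context data into the surrounding template. (The real renderer uses a `<!--vue-ssr-outlet-->` comment as the outlet and a `renderToString(app, context)` call; the function below only mimics the idea.)

```javascript
// Toy model of SSR template interpolation: insert the app HTML at the outlet,
// then replace {{ key }} placeholders with page-specific context values.
function renderWithTemplate(template, appHtml, context) {
  let html = template.replace('<!--ssr-outlet-->', appHtml);
  for (const [key, value] of Object.entries(context)) {
    html = html.split(`{{ ${key} }}`).join(value);
  }
  return html;
}

const template =
  '<html><head><title>{{ title }}</title></head><body><!--ssr-outlet--></body></html>';
const html = renderWithTemplate(template, '<div id="app">Cats</div>', { title: 'Cat Shop' });
console.log(html);
// <html><head><title>Cat Shop</title></head><body><div id="app">Cats</div></body></html>
```

The resulting string is what gets sent to the browser; the client-side bundle then hydrates the `#app` element.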
It does, as long as they're written in universal JavaScript. So you still have to make sure that your JavaScript also runs on the server. You see a theme evolving here, right? Universal JavaScript. So we still have to make edits to make sure our code is universal JavaScript. But what about Angular? How does it work? So for Angular, it's pretty much a similar story. The project is called Angular Universal, and it does SSR plus hydration as well. It comes with a bunch of stuff. If you use Express.js, for instance, it comes with this ngExpressEngine that you can use, and you pass in the AppServerModuleNgFactory, which is a really easy thing to say. And yeah. So the ngExpressEngine runs our Angular app code on the server and generates the HTML we can serve, right? Yeah. It's part of Angular Universal, and all the hydration code and that kind of stuff comes on top of it. You have to make some code adjustments, but basically you get SSR and hydration for Angular that way, and it's pretty well supported. But again, it doesn't work without universal JavaScript. That was a lot to take in. Is there somewhere to learn more about all this stuff? Funny that you ask, because we happen to have made a YouTube video series called JavaScript SEO. If you haven't checked that out, go to youtube.com/GoogleWebmasters, subscribe to the channel, and check out the JavaScript SEO playlist. Cool. Now, we have been talking quite a lot about server-side rendering, server-side rendering with hydration, and pre-rendering. But last year at I/O, I remember that John actually said something about dynamic rendering. I think dynamic rendering was this thing that you can use to pre-render, but only for bots and crawlers, for SEO. So now that we have the new Googlebot after all, does that mean we can ditch dynamic rendering and it's going away? 
Well, we want site owners to focus on the user experience and content quality. For the time being, dynamic rendering may be useful, but it's not a long-term strategy. Right. But then what would be the use cases for dynamic rendering? First and foremost, it's a workaround. Unfortunately, not all crawlers and bots support JavaScript right now. Right, like the social media crawlers; most of them don't run it, and some other search engines don't do that well, either. Right, OK, gotcha. But I think it could also be a stepping stone. It's a workaround, but it can be a stepping stone towards improving your website, because server-side rendering and hydration are a bunch of work that you have to do, right? Yes, yes, but remember that server-side rendering and hydration bring great benefits to your users, while dynamic rendering does not. Right, and that's because dynamic rendering basically only improves the HTML content coverage in the first response that goes to the browser, and only for bots, not for users. So users still get the slower site if they are unlucky. Yeah, that's correct. So consider dynamic rendering a workaround until other bots have caught up, or until you have upgraded your website to server-side rendering, or server-side rendering plus hydration. I think that makes sense, yeah. Phew. That was a lot to take in, wasn't it? That was so much stuff in so little time. But I think it actually gives you all a foundation to build from, and I think that's a very, very important thing, isn't it? Like, do you like what you just got? Yes? But wait, there's more. So Zoe, now that I have you on stage, I was wondering: now that I build sites and consider SEO, can I also test what I'm building and whether it's actually working properly? Ah, well, good question. Let's take a look at a few tools. OK, fair enough. A great first stop is Lighthouse. It's right in your Chrome DevTools. Come on, I know that one. It has the performance audits. 
It has best practices. It has accessibility. But what does that have to do with SEO? Well, it's also got some SEO audits. And they help you improve your website's discoverability and make your users happier. Let's run the audit, and there we go. Here are your results. This is not looking good. I mean, whoops. OK. Yeah, 44 points. But no worries. Many of these are low-hanging fruit and easy to fix. The best candidates are probably the title and meta description. Right, yes. Because if you use JavaScript frameworks, you often forget about these, and it's not always quite obvious how to make this happen in JavaScript frameworks. So let's look at React, for instance. React is a great framework. And if you use React Helmet, it's even better, because you can do all these things. A React Helmet? Martin, I'm building a website, not going on a bicycle tour. What's that React Helmet for? You should go on a bicycle tour; it's quite nice weather outside. But it's similar to a bike helmet: it might not look fantastic, but it's really helpful. Bike helmets oftentimes don't make you look great, but they make things work out for you. And it's similar with React Helmet. React Helmet allows us to add to our template and basically use any props in our component to set the title and the meta description to something that is specific to the page. Let's say I have a product detail page, right? I could take the product name and put it in the title, and then put some description of the product into the meta description. That's going to look fantastic in search results. Great, great. So this works for React, but what about other frameworks? Right, OK. Other frameworks, Angular, for instance, have it built in. In Angular, they're called the Title and Meta services, and they do the exact same thing. Got it. So here we have the same thing. 
We take the title and the meta description and put something from our component into them to show exactly what's on the page. And now that's Angular and React. You might be asking next: what about Vue.js? Vue.js has an extension called vue-meta. It gives you this additional property in your components called metaInfo, and it does the same thing. And so that's it? Oh, actually, for Vue.js, you want to make sure, if you use the Vue Router, to switch to history mode. History mode? What's that for? So it defaults to the fallback mode, which uses hash URLs, and hash URLs aren't exactly great for crawling. I see. So maybe only the home page would be indexed. Yeah, because we're not really crawling these fragment URLs, right? We said that in last year's I/O talk as well. So that's no good. Speaking of no good, Martin, Googlebot only sees the placeholder images on your website. Wait, hang on. That's weird. It works in my browser. So why does it not work here? We have a fresh Googlebot and everything. Yeah, yeah. Well, do you maybe use lazy-loading images? I actually do, and I do it like this; this is the code. So this should be fine, right? I mean, whenever you scroll, I figure out if the image is within the viewport, and if it is, then I load the actual image. So that should be good. Well, Googlebot does support lazy loading. But keep in mind, it may interact with the page differently from your users. In your case, Googlebot doesn't scroll, so the scroll handler isn't triggered and the lazy-loaded contents aren't loaded. Right. But I guess what I should be doing is using something else, like Intersection Observer maybe? Yes, you could use an Intersection Observer. This is an API that triggers a callback whenever an observed element becomes visible. It's more flexible and more robust. So I can use the same code to still load my images, but now it's more robust thanks to Intersection Observer. And that's supported in modern Googlebot. That's amazing. 
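The Intersection Observer pattern just described might look like this, with the callback separated out so the loading logic can be shown outside a browser (the `data-src` attribute convention and the mock objects are illustrative):

```javascript
// When an observed image enters the viewport, swap its placeholder for the
// real URL stored in data-src, and stop observing so it only fires once.
function onIntersection(entries, observer) {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;  // load the real image
      observer.unobserve(entry.target);             // done with this image
    }
  }
}

// Browser wiring (commented out here because it needs the DOM):
// const observer = new IntersectionObserver(onIntersection);
// document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));

// Minimal mocks to demonstrate the behavior:
const img = { src: 'placeholder.png', dataset: { src: 'cat.jpg' } };
const observer = { unobserve(target) { this.last = target; } };
onIntersection([{ isIntersecting: true, target: img }], observer);
console.log(img.src); // cat.jpg
```

Unlike a scroll handler, this doesn't depend on scroll events ever firing, which is exactly why it works for a crawler that doesn't scroll.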
It is. So I wonder, can I also use the new native lazy loading that Chrome's getting? Absolutely. It's still an experimental feature, but worst-case scenario, it gets ignored. So feel free. Ah, so that's pretty cool, because it's just this additional attribute that gets ignored where it's not supported, and it just works. OK, that's cool. That's nice. So nothing can go wrong there, which is nice. Can I ask you another question, then? Is it OK if I ask you another question? Oh, yeah. Go ahead. So I was using Search Console earlier today, and I noticed that we have this weird thing called soft 404s. Ooh, soft 404s. No, sorry, not soft 404s. We have this issue where I basically update my website, and then my pages get discovered, but not crawled yet. So that's the one that I'm having struggles with. What does it mean to be discovered but not crawled yet? Oh, it means Googlebot hasn't crawled your page yet. Perhaps your site's crawl budget is too low for those pages to be crawled and indexed. What's crawl budget? Well, crawl budget is a rate limit on how frequently Googlebot can crawl your site. Oh, could you go back to the slide? Oh, sorry. Right, let's stay here for a moment. So yeah, it's how much Google crawls my site. Yeah, yeah. And we do this to be a good web citizen and avoid knocking over web servers with too many concurrent requests. Ah, OK. Is there something else that I should know about crawl budget? Well, crawl budget also applies to resources loaded from the page during rendering. What do you mean by resources? For example, JavaScript files loaded from script tags, or XHR or fetch requests triggered by JavaScript. Those count against my crawl budget? Yeah. All right, OK, so I need to make sure that I'm not making too many requests, I guess. But one more thing, Martin: we don't have forever to perform a render, and staying within the crawl budget can be tricky sometimes. To help with this, we cache certain sub-resources across renders. 
So that means that if I have multiple pages and they all use the same libraries or the same code, you don't fetch it over and over again. Correct. Googlebot might just fetch it once and then reuse it. OK, that's pretty nice. So what do you do if my site has an infinite loop? Is that OK? Are you going to index what you've got so far? Yeah, yeah. If your site has an infinite loop, then at some point we'll cut off the JavaScript, cut off any fetching that might still be going on, and return the contents that we have up to that point. That's pretty cool. So that's nice. Is there something else that I should know? Yes, yes. Did you know that we put URLs we discover into a queue and crawl, render, and index them separately? It's not like Googlebot is clicking on links and going from page to page. Damn it. So that means that the render queue is still a thing. We haven't removed that, even with the new update, right? OK, right. But what if I have, let's say, a website that requires you to set a cookie, and only if the cookie is set do we actually show you the content. Would that work? No, that would not work. Googlebot does not store cookies across requests, so it always sees the page like a signed-out user would. Right. OK, so it doesn't do that with cookies, but what about the other ways to persist data across page loads? What if I use local storage, session storage, or IndexedDB? These APIs are available, but like cookies, they don't persist data between renders. So they're basically only available within a single render pass. All right, so that means I can't persist data across renders. OK, right, fair enough. But what about other things, like service workers or the Cache API? Well, currently we don't support the Service Worker API. But if we did, every render would start with no service workers installed. Oh, right, because basically someone coming from search results is pretty much a first-time user, so we can't rely on that. 
That makes sense. I think that makes sense. So in that case, whenever I use some feature on the web, I need to feature-detect whether it's actually present. Ah, yes, feature detection is a great idea. Progressive enhancement is a good strategy when different users might be on devices or browsers with varying capabilities. But you should make sure to also handle the not-so-happy path. Oh, wait, so what you mean here is error handling, right? Yeah. I need to make sure that my code does that properly. So let's say there's something like this right here. The idea here would be that I have localized content. So I check if geolocation exists; if not, I just load the fallback content. I think that's fine, right? And then when it exists, I use the current position to load the localized content. Is that fine, or what happens if there's a mistake? Well, imagine the getCurrentPosition function throws an error or an exception. Then your web page's contents would not be shown to Googlebot, because the geolocation feature is available, but an error happens. Yes, this can and does happen to Googlebot. I've seen it more than once. OK, what kind of APIs could be affected? For example, and I think this is on another slide, WebSockets. WebSockets, all right, but also things that require permissions, like the microphone or webcam. What do we do with these things? We deny all permissions. OK, I think that makes sense. After all, what would the webcam picture look like for Googlebot? Like a server room or something? That would be weird. So I think to fix this, what I could do is basically just have an error fallback that also loads the fallback content. Yes, and making sure you treat these error conditions appropriately is important. You don't want to make the mistake of leaving Googlebot or users without content. Yeah, that would be a mistake. Oh, by the way, you're going to love this, because this keeps happening to me. 
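The fix described here, handling both the missing-API case and the error path with the same fallback, could look like this sketch. `loadLocalContent` and `loadFallbackContent` are hypothetical stand-ins for the page's own code:

```javascript
// Feature-detect geolocation AND handle the error path: if the API is absent,
// or present but failing (e.g. permission denied), show the fallback content.
function loadContent(geolocation, loadLocalContent, loadFallbackContent) {
  if (!geolocation) {
    loadFallbackContent();            // feature detection: API not available
    return;
  }
  geolocation.getCurrentPosition(
    position => loadLocalContent(position),
    () => loadFallbackContent()       // API exists, but errored; Googlebot
  );                                  // denies all permission requests
}

// Mock of Googlebot's behavior: the API exists but the permission is denied.
let shown = null;
const deniedGeolocation = {
  getCurrentPosition: (onSuccess, onError) => onError(new Error('denied')),
};
loadContent(deniedGeolocation, () => { shown = 'local'; }, () => { shown = 'fallback'; });
console.log(shown); // fallback
```

With only the `if (!geolocation)` check and no error callback, this same page would render nothing for Googlebot.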
I noticed something in our performance report in Search Console. So we have this support page, and it is great, but it keeps losing impressions and clicks in the search results. So this is not exactly great, but I don't understand what's happening here, because our code is perfectly fine. The site is on Google, it's indexed, and all the content is there when I test it. But there's something weird going on, because we figured out that it's not actually fetching the data from the API, and we're not sure why. Ah, well, would you mind showing me the code, Martin? I mean, we're among friends here, so sure. Here we go. So our support bot basically opens a WebSocket connection, then uses the connect event to show a bunch of initial FAQ data and give you a welcome message, and then it starts handling the chat. This is what I mentioned earlier, and what I was worried about when you said your requests don't show up: we don't allow WebSocket connections in Googlebot for architectural reasons. Oh, OK. So that explains why we're not seeing the data. I guess making that static content, and maybe even using the new FAQ structured data, is probably a good idea. Right. That makes sense. So I have one more question. Do I have some time for that? Yes, we have a few minutes left. We have a few minutes left. So basically, I noticed that we have this soft 404 problem, and I'm not so sure what that means. Oh, a soft 404 means you gave us an HTTP 200 OK, but the contents look like an error page. OK, yes. That does happen in my single-page application, because my server doesn't actually know all the different routes and products that we are serving. So what we do is serve the index HTML, and then the JavaScript runs and figures out whether that thing actually exists. And if not, it shows an error, but we can't change the HTTP status code anymore. So what would I do in this kind of case? Well, you've got a couple of ways out of this situation. 
You can either redirect to a URL that returns a 404 from the server, or alternatively, you can use the meta robots tag with noindex. But why do I even have to care about that kind of stuff? Can't you just catch it for me? Well, HTTP status codes, Martin, they're kind of important. They tell Google how to interpret the response we get. You tell crawlers and users' browsers: this is a good URL, here's some content, and all is good. I mean, that sounds good so far. But in the case of a soft 404, you tell us this is fine, and then turn around and give us something that says otherwise. It's a bit backstabby. That doesn't sound good. Oh, and it's not good, Martin. HTTP statuses give crawlers and users valuable information. OK. So in that case, can you elaborate on that a little bit more? Oh, all day. So for example, if you remove a page and give us a 404 status, we will remove the page from the index after a short while if it's not coming back online. I guess that makes perfect sense, and that's how it's supposed to be. But let's say this content wasn't removed; it has just moved to a new URL instead. We wouldn't know that. OK, that's fair, because I guess if we just remove it, then you have to rediscover it from scratch, and then you wouldn't know that there's a new thing somewhere else, right? That's right. And that's why using a 301 redirect to indicate the move is a much better idea. OK, so I got this. But I actually have an idea for you. Oh my god, I have a fantastic solution to the soft 404 problem. How about this: I basically just add a noindex to all my pages, right? So the robot doesn't care. But then, once I know that the cat I'm loading actually exists, I change that with JavaScript to "please index me". That solves the problem, right? Oh, well, that might not end well. Why? Look, we see a noindex early on, so we could stop the process right there and not take the removal through JavaScript into account. 
And all your pages would be marked as noindex in that case. So that would remove my whole website from the index. That's a terrible idea, I understand that. Thank you very much for preventing that from happening. OK, right. Looking at the time, we don't want to hold you all too long, so I think it's time to summarize this a little bit. I mean, we have been talking about a bunch of edge cases. But luckily, all of those things are edge cases, right? As long as you follow the best practices and do the right things as a web developer and an SEO, you should pretty much be safe, right? Right, right. And I think it's important to mention that we're continuously working to improve things as well, like the new evergreen Chromium in indexing. It's good stuff. Yes, that's the big one, really. But it's also quite cool that we discussed server-side rendering, server-side rendering with hydration, and pre-rendering, because those can also improve the user's experience of your site, not just the SEO side of things. Yeah, hydration. Yes. Also, we talked a little bit about something else, and that was testing. Yeah, testing, plus one. Basically, test early, test often. Absolutely. You don't want any bad surprises. Bad surprises are not good. Is there anywhere to find out more about this stuff, Martin? Indeed, that's a very good question. So again, we have a YouTube channel with lots of content. We also do bi-weekly office hours if you want to hang out with us online and ask us questions. That's a great thing to do as well. And don't forget the docs. Oh, right. We keep adding new docs all the time. Yes, that's true. And I think the thing that both of us would love for you to take away from this session, and this is the main point here: we don't want you to worry too much about building stuff for search engines. We want you to build great user experiences on the web. Yes. Our goal is to empower you to build great discoverable web apps and websites. 
That's why the new Googlebot is such a big leap forward. It is an amazing thing. And we will continue to improve Googlebot, and Google Search generally, with new features and capabilities. So please, please, please continue to make fantastic content for your users, and blow everyone away who comes to your website. And with that being said, I would like to say thank you so much for coming to this talk. Woo-hoo. You did it. You're very welcome, Martin. It has been a great pleasure having you. And thank you all as well. Thank you so much for being here. If you have any questions, we'll be around. You can ask us questions, follow us on Twitter, check out our YouTube videos. Thanks.