Hello, and welcome to the JavaScript SEO Office Hours here at Google Search Central. If you haven't seen them before, we have Office Hours for general SEO questions, as well as JavaScript SEO questions, as well as Japanese Office Hours. And we have e-commerce Office Hours. So if you have questions, drop over to our YouTube channel at youtube.com/c/GoogleSearchCentral and check the community tab for upcoming Office Hours. I'm super happy to see that a few people have joined today and a few people have submitted questions on the YouTube thread. So we'll go through the YouTube questions first, and then I'll open the floor for questions from the audience here. Excellent.

So Jeff James asked: if part of a page is rendered server-side initially, and then he uses client-side rendering, does that influence Google's decision to render the JavaScript component of the page? We can see crawl logs where only about 20% of the server-side requests from Google on a certain page show up for client-side versions or assets.

Generally speaking, that should not influence our decision to render pages. We pretty much render all the pages. But sometimes your JavaScript might time out, or there might be a caching issue, which might mean that we are not seeing all your content. Unless you have an indexing issue, I wouldn't really think about this; as long as your content is showing up in searches, you should be fine. It's not really something that you need to actively monitor.

Oh, hold on, the question is actually longer. Is this in contrast to a page where rendering is done exclusively with JavaScript? No. There is a heuristic that tries to understand this, but this heuristic is very rarely used and only for certain legacy domains. So it is not really a problem that you need to work with. Just make sure that you version your assets properly and it should be fine. Also, the amount of crawling does not really mean much. This might just be a, what's it called, a crawl rate-related thing that you might be seeing in server logs. Also be aware that some fake bots pretend to be Googlebot, so server logs need to be taken with a grain of salt. You need to verify that the crawl came from a Google IP address, because anyone can say, yeah, I'm Googlebot, and we see lots of fake Googlebots out there. So I wouldn't worry about that unless you see specific issues with content not showing up in search.

Philip asked: how do you prefer to measure CPU and RAM consumption when the browser loads a URL or a website, and what are acceptable ranges? This may be more general, but in most cases the problem is on the JavaScript side, and high CPU and RAM consumption could lead to partial indexing, where Googlebot will not process all content on the URL.

That is true. We do limit CPU consumption; it's CPU time, actually, that we are limiting, but mostly to catch things such as infinite loops and other issues. So I wouldn't worry too much about this specifically. I have personally seen very few cases where this was a problem, and in all of the cases I've seen, it was incorrect code, basically broken code, that ran into infinite loops, and then the rest of the content wasn't indexed because the code never actually reached it. The general thing is, I'd rather not give a specific time or range or any kind of measure, because A, these are implementation details that can change at any point in time, and B, we are also using heuristics, so it's not really a clear-cut number or range that I can give you.
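Circling back to the fake Googlebot point for a second, because it is easy to get wrong: the check I mean is reverse DNS plus a confirming forward lookup. A minimal sketch, assuming Node.js and its built-in dns module; the IP address would come from your server logs:

```javascript
// Rough sketch of verifying a claimed Googlebot by IP, not by user agent.
const { reverse, lookup } = require('dns').promises;

async function isRealGooglebot(ip) {
  try {
    // 1. Reverse DNS: a real Googlebot IP resolves to a hostname
    //    ending in googlebot.com or google.com.
    const [hostname] = await reverse(ip);
    if (!/\.(googlebot|google)\.com$/.test(hostname)) return false;

    // 2. Forward DNS: the hostname must resolve back to the same IP,
    //    because the reverse record alone could be faked.
    const { address } = await lookup(hostname);
    return address === ip;
  } catch {
    return false; // no PTR record at all: treat it as not Googlebot
  }
}

// isRealGooglebot('66.249.66.1').then(console.log);
```

The point is simply not to trust the user agent string on its own.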
Anyway, the general recommendation is: make it as fast as you can, with as few CPU resources as you can. If it takes a MacBook two minutes of spinning the CPU at 100%, that may not be great for users either. So think about it from a user's perspective. If you are consuming CPU resources on end for a long time, your users will probably be unhappy; especially on mobile phones, that is really a killer. So try to optimize as much as you can, but be reasonable about it, because in the end, if users are seeing your content and everyone's happy and it's also showing up in search, you shouldn't worry about this. Again, this is something that you can go in and profile and debug when you are seeing an issue. I wouldn't proactively look too much, unless you really hear your fans spinning up to 100% after a couple of minutes of trying to load a website; that's usually a really bad sign.

And those are the two questions we have from YouTube, which means we can now open up the floor for questions. I've seen various people here in the call today, so that's pretty cool. If you have any questions, feel free to just ask them. You can also use the chat function on the right-hand side of this Google Meet recording if you don't want to ask a question by voice. Any questions?

Hi, Martin. Are you OK? Hi, Dave. Just a quick one: content-visibility, the new CSS property that defers rendering of content until it hits the viewport. Are there any implications from Googlebot's side in using this at all?

I don't think there is any problem with it. I haven't tested it yet. That's a really good question. I love it when you're in these Hangouts, because you always ask questions where I'm like, oh, wait, I should actually definitely test this. I expect this to work out of the box as we are upgrading the Chromium renderer, and it should fall back reasonably. But I definitely recommend testing this out and drawing your own conclusions until I have had a look and we probably have some guidance on this. If you don't see any guidance coming up in the next couple of months, that just means it works out of the box and doesn't need any specific things. And to be honest, if it doesn't work, then that's a problem on our side that we will need to fix sooner rather than later. Very good question. Thank you. No, thank you. Thank you for the answer. Very happy. I know that "I haven't tested it" is not the greatest answer ever, but there are so many things that I need to test. More hours in a day, that's what we need. Definitely.
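For anyone who wants to experiment with it in the meantime: content-visibility is plain CSS, so it can be treated as a progressive enhancement. A quick sketch of a feature check from JavaScript; the class name is made up:

```javascript
// If the browser understands content-visibility, let it skip rendering
// work for long off-screen sections until they approach the viewport.
if (CSS.supports('content-visibility', 'auto')) {
  document.querySelectorAll('.long-section').forEach((el) => {
    el.style.contentVisibility = 'auto';
  });
}
// Browsers and renderers that do not support the property simply render
// everything as usual, which is the safe fallback behavior.
```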
All right. Anyone else having a question? Now is the opportunity. You can, again, also use the chat if you'd rather not pipe up. Let me refresh the YouTube tab, because sometimes people are posting to YouTube a little late. No, it doesn't look like it. Newest first. Yeah, it's a really unfortunate thing that you have to actually sort by newest first to see all the comments. Any questions? Yes.

Hi, Martin. Hi. I have a question about Core Web Vitals, particularly the fact that the APIs, as far as we know, are currently not supported on Firefox and Safari; they're only supported by the Blink-based browsers. So my question is, do you have any recommendations if you're looking to test and optimize websites, particularly for those browsers? Are there any particular metrics that you would use to replace, like, largest contentful paint? Like, if you had to come up with, let's say, a workflow, are there any particular metrics that you'd use, and how would you go about this problem?

I'm not sure about Safari's DevTools, but I know that Firefox has great developer tools. They don't really have anything that allows you to gather real user metrics, though, so you would have to fall back to gathering lab data instead of field data. Lab data is not perfect, but it is better than having no data whatsoever. So I would definitely just dive into the developer tools and do the performance audits with the developer tools that you have there, and probably also try to find some users, or someone in your team who happens to have devices that are not top-of-the-line devices, to actually run the performance profiling there as well. That's the best you can do. I do hope that the proposed APIs land in other browsers as well, because I think having more field data is just generally useful. But if you can't, then at least use some lab data. I see, thanks. Sorry for not having a better answer. Every browser is still figuring out this performance thing, so it takes a while for everyone to come up with ways of gathering information. Also, I think Firefox might have a philosophical stance on the point of gathering field data. I can't speak for Mozilla, though.

All right, more questions? Five comments. If there's no one, maybe just a follow-up question then. I'm not sure if you're the right person to ask, because I don't know how invested you are in web performance, actually, but let's say you cannot use largest contentful paint. What would be your other, let's say, favorite, or maybe not favorite, but most useful metrics for measuring render performance? Do you have any thoughts on that?

Yeah, I do. Actually, I am relatively invested in web performance, and I used to be very, very invested, having worked for a startup that does VR on the web for architectural applications. If I can't do that, I would look into frame timings. I would try my best to see if we can hit 60 frames per second as quickly as possible, because that's a good approximation for, A, does something hog the GPU, and B, does something hog the main thread, because if something is blocking the main thread, that usually results in frame drops quite quickly. Okay, cool, thanks. You're welcome. Also, frame timings are relatively, well, relatively easy to measure. I'm saying relative to other things; largest contentful paint is really tricky to measure yourself unless you have the APIs that give you this information. Whereas frame timings, you can measure within a requestAnimationFrame loop by measuring the times. It's still not easy to get accurate frame timings, but it's easier than the other options, I think.
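To make that concrete, here is a minimal sketch of such a loop; the 30 fps threshold and the console logging are just placeholder choices:

```javascript
// Measure frame-to-frame time with requestAnimationFrame and flag slow
// frames. At 60 fps a frame has a budget of roughly 16.7 milliseconds.
let last = performance.now();

function measureFrame(now) {
  const delta = now - last; // time since the previous frame, in ms
  if (delta > 1000 / 30) {
    // Over ~33 ms: we dropped below 30 fps, so the main thread or the
    // GPU was busy during this frame.
    console.warn(`Slow frame: ${delta.toFixed(1)} ms`);
  }
  last = now;
  requestAnimationFrame(measureFrame);
}

requestAnimationFrame(measureFrame);
```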
All right. Okay, here's a question from Mohammed in the chat: how long can the bot wait for rendering JS code? I'm guessing JS code means JavaScript. Well, I don't like that this question gets asked, because it tells me someone is bargaining with a bot, and that's a bad idea. As quickly as possible; do it as quickly as possible. We do wait quite a bit, but users don't. So even if it is okay for the bot, users might not be so happy. So try to get your JavaScript over the wire as quickly as possible and get the content on screen as quickly as possible. If you're talking tens of seconds, then you already have a problem, right? I know that some websites take minutes and they are still indexed fine, but they're definitely not making users happy.

So, all right. Any more questions? Seeing that we have nine people here. Well, that's eight of you and me, right? Now is your opportunity. Don't be shy. And again, if you are shy, that's perfectly fine. You can also use the chat. Asking questions in the chat is perfectly acceptable.

Okay, in that case, let me ask a question. If you are here in these JavaScript SEO office hours, what brought you here? What is a thing that you are concerned about? What is a thing that you have experienced that didn't go so well? What kind of JavaScript frameworks have you been working with? Feel free to put it in the chat or just talk. That's also perfectly fine. Quiet group today, huh?

Also, I'm trying not to butcher names, but I'm really not sure how to pronounce this one. Is it a Polish name? Yes, it's a Polish name. Is it Zimowit? Zimowit. Zimowit, ha! That wasn't too bad. I was super afraid to answer your question and start with a name pronounced incorrectly, and I'm very sorry that I was a little scared about screwing it up. So, Zimowit. That's correct. And I completely understand; it's rare even for Poland, so I'm used to it. Interesting. I'm super sorry again. I'm super happy that you're here and I'm very happy to learn.

You know what? I'll see if I can pronounce people's names. So, Caroline is probably correct. Dave. Henning. Henning, are you from Germany or Austria or Switzerland? It sounds like a very German name. Yes, from Germany, Hamburg. Hamburg! Yeah, but I live in Barcelona. Oh, wow! Okay, yeah. That's really nice. That's lovely. Yes, yeah. I mean, Hamburg is nice too. It's just a little more cold, I guess. Yes, grey. Grey, grey as well. Grey is probably also a problem, that's true. Exactly. Cool, Barcelona. Coming from Hamburg to Barcelona, that must have been an interesting change of pace and scenery. My wife comes from here. Over the years, we spent so much time here before we moved, so it's nothing new for me. Right, it wasn't like a drastic change. Okay, I get that. No, not that.

And then I think we have Mohamed, or Mohamed, I'm never sure which way to pronounce these. Sevillei. Tamas, Tomas, Tomek. Come on, let me call you Tomek. And Tamas is right? Yeah, Tamas, yeah. From Hungary. That's the official name, but everybody says Tomek. So we have Tomek from Poland and then we have Tamas from Hungary. Nice. Exactly. That's really cool. Unknown, I think, is Simon, but I'm not sure. And then we have Zimowit, okay.

And then we actually have a comment from Caroline in the chat: I used to work for a company who used Ajax JavaScript to create product pages. It is commonly discussed in the SEO world that Ajax is not ideal for SEO, and she has also seen pages not get indexed properly. But what do you think, is Ajax really not ideal for SEO?

Well, that's a really good question, Caroline. Thank you very much for sharing this with us. So, what Ajax really means: it's actually a bit of an Anachronismus, or anachronism, I think, is the English word; Anachronismus is the very German way of saying it. Anyway. So when we used to build websites in the olden days, what would happen is: you make a request to the server, you get the website, and then that's it. That's the entire interaction that happens.
And then at some point, we're like, but it would be cool to kind of get data back to the server, like if I want a contact form or a guest book, right? Or comments. So the way that worked is: basically you go to the website, the server responds with the HTML, and there's some form in the HTML. You would fill that in and click Submit. That would submit it to the server, and the server would send back a new website, pretty much. That's how that worked, which means you have this page refresh in the middle, this flash of white, between when the server gets the data from you and when it has given you the response back, like, thanks for your contact request, or here's the new website, the new article with the new guest book comment that you just wrote in it.

And then we came up with this idea: hey, hold on, instead of basically refreshing the entire website, couldn't we just use JavaScript to send it? So you have this form where you can type in your comment under a blog post, for instance, and you hit Submit, and instead of actually refreshing the entire website, JavaScript makes the request in the background, and then the server responds back, and then we can reload the page. Or even better, we can basically just change the page, because we already have the page, so we can add the new stuff using JavaScript. And that's where Ajax comes from. Ajax stands for Asynchronous JavaScript and XML. So originally you would make an asynchronous request, which kind of means "in the background". Asynchronous means something a little different, but for the sake of this explanation, "in the background" is good enough. So in the background, JavaScript makes a request and gets XML back, and then from that XML, you would create the HTML that you need.

And then there were variations of it. There was AJAH, which is basically asynchronous JavaScript and HTML, so the response back to that JavaScript request would be HTML, and then you can just plug that HTML straight into the page. And then there was, I guess you would pronounce it AJAJ, which is asynchronous JavaScript and JSON, JSON just being a different representation. So XML, HTML and JSON are just structured ways of sending data. If you look at an HTML document, it's structured information, right? This is the body, this is the title, this is the headline, this is the subheadline, this is a link, this is a button, and so on and so forth. It's just structured information. You can use other things: XML is a more flexible way of structuring information, and JavaScript Object Notation, or JSON in short, is another way of structuring data.

And is this bad for SEO? Yes and no. It made life harder for search engines, because we can no longer just make a request to the server and get the website, because now some content might be loaded with JavaScript. So you make a request, get a pretty much empty website back, and then JavaScript does these Ajax requests to actually fetch more data and put it into the website. But for a couple of years now, Google has been very capable in terms of running JavaScript. So the Ajax requests themselves are not really a problem, but they are additional complexity, and complexity is usually a thing that you want to minimize as much as you can. You cannot really avoid it, but you can minimize it. And the reason why I'm saying that is that there are more ways for things to go wrong.
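To picture it, here is a minimal sketch of a modern Ajax request, the AJAJ flavor: JSON fetched in the background and inserted into the page without a refresh. The /api/comments endpoint and the #comments list are hypothetical:

```javascript
// Fetch comments as JSON in the background and add them to the page.
async function loadComments() {
  const response = await fetch('/api/comments'); // the asynchronous part
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const comments = await response.json(); // JSON instead of the original XML

  const list = document.querySelector('#comments');
  for (const comment of comments) {
    const item = document.createElement('li');
    item.textContent = comment.text; // no page refresh, we just change the DOM
    list.appendChild(item);
  }
}

loadComments();
```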
For instance, what if this JavaScript request that fetches data to show on the website is roboted? Say your JavaScript makes a request to /api/cats to get a list of cats to show on the website, and then robots.txt says /api is disallowed. It works in the browser, because the browser doesn't really care about robots.txt, but it doesn't work in Google Search, because we do care about robots.txt. So we are basically seeing this empty website, we are running the JavaScript, the JavaScript is like, can you fetch me /api/cats, and then the robots.txt says, no, you cannot. And then we're like, oops. It's harder to test these things, it's harder to fix these things. So it isn't fantastic for SEO, but it's not really a problem either, because once you implement it properly, it's fine. Yet I would say it's additional complexity that you don't necessarily need unless you have a good reason. Thank you very much for explaining it that well. It's really good, thank you. You're welcome. I try my best here.

Mohammed is asking: does it affect the rest of the content if some JavaScript code is blocking rendering of some part of the content on the page? Yeah, yeah. So let's say you have a piece of JavaScript that never finishes; it's basically stuck. We would run through the page, we would run that JavaScript and then get stuck, and eventually we would stop rendering, because it's like, okay, this gets stuck, so we have to try again later. And in that case, we don't see the content that this JavaScript would fetch, and we don't see the content that would be in the HTML after the blocking JavaScript either, because the processing would be stuck at that point. So that is a potential problem in terms of indexing.

All right, do we have more questions, now that we all got to know each other a little bit and got the ball rolling? And again, feel free to use the chat if you don't want to speak up. Oh, Henning is asking: is there a problem with pre-rendered pages for Googlebot with pre-render.io? Sometimes these pages are not up to date anymore, and therefore the resources, like JavaScript and CSS, are no longer accessible, because the hash in the file name has changed. Can this influence the indexing, or does it only influence rendering?

Well, what pre-render.io does is basically dynamic rendering, and it is tricky. So, the reason why pre-rendering is a tricky thing, and when I say pre-rendering, I don't mean actual pre-rendering. Okay, let me back up. There are a bunch of different ways of doing things, right? One way: let's say you have a blog. This blog only ever changes if you update a blog post or create a new one or remove an old one, so you know exactly when it changes. If you are using some JavaScript, you could basically run it on your machine, make it generate all the HTML, and then only upload the HTML, because this HTML only changes when you make a change. That actually is pre-rendering, or what's usually called pre-rendering in developer circles. Then there is client-side rendering, where you have a bunch of templates in HTML, and then you have some JavaScript that fetches data from APIs or backends or CMSs or whatever, it doesn't really matter. The JavaScript in the browser grabs the additional content and puts it in the HTML. That's client-side rendering.
And then you could do server-side rendering, which you can do with JavaScript, with Python, with Perl, with PHP, with Java, with ASP.NET, with Golang, with Rust, I don't know. Basically, that works by a request coming in, and some program runs, and that could be a JavaScript program generating the HTML, and then you send the HTML back. That also works.

And then there's what we call dynamic rendering, which you normally only do for bots, with things like Rendertron or pre-render.io or any of the other providers out there. And what that does is basically: a request from Googlebot, or from users if you configure it that way, comes to one of the pre-rendering services. The pre-rendering service opens a browser on the server, which then makes a request to the actual server, gets the response back, runs all the JavaScript, waits until there's HTML in the browser, and then takes that HTML snapshot and sends it over. Do you see a problem here? You should, because opening a browser on the server, opening the page in that browser, waiting for that page to load, that takes time. So these requests are usually a lot slower. And, I don't know, have you ever seen a browser crash? I did: just this morning a browser crashed on me on my phone, and a few days ago my browser crashed on my laptop. Browsers can crash. So what happens then? Well, then we have to catch that crash, then we have to reopen the browser. It takes even longer. If our code here in the pre-render solution isn't great, then maybe it doesn't notice that the browser crashed and sends back empty HTML. So it invites a whole new load of problems, and it takes a long time.

Because it takes so long, oftentimes you would instead basically cache these things. So you would say, oh, we don't always open it in a browser; we open it once and then cache the HTML that we get for a day, for an hour, for five minutes, because that's usually good enough. Sometimes these caching decisions are not great, because you might cache too long or you might not cache long enough, and then things get slow again. So pre-rendering isn't the silver bullet that magically makes all the problems disappear.

If this happens and it tries to access resources that it doesn't have anymore: with the CSS it's probably fine, because the page will still work nonetheless. The content should still be there. It might look weird, but that doesn't really matter that much. If the JavaScript resources fail, and the JavaScript is what actually fetches the content, then the HTML rendered by the pre-render solution might miss the content. And if it misses the content, then indexing won't see it. So it's not just a rendering issue: if content is missing in rendering, or after rendering, then indexing can't index it either. So that can immediately become, or escalate into, an indexing issue for you, which is why I would keep the old assets around for a while to make sure that you're not running into these problems. I don't know how long pre-render.io caches things, but yeah, I would keep the old versions around.
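To picture the moving parts, here is a rough sketch of the user-agent branching a dynamic rendering setup does, assuming an Express app on a recent Node.js; the bot list and the rendering service URL are placeholders:

```javascript
// Users get the normal client-side app; known bots get a snapshot from a
// pre-rendering service that runs a headless browser somewhere else.
const express = require('express');
const app = express();

const BOT_UA = /googlebot|bingbot/i; // grossly simplified bot detection

app.use(async (req, res, next) => {
  if (!BOT_UA.test(req.get('user-agent') || '')) return next();

  // The slow part hides behind this request: the service opens a browser,
  // loads the page, waits for the JavaScript, and snapshots the HTML.
  const pageUrl = `https://${req.get('host')}${req.originalUrl}`;
  const snapshot = await fetch(
    `https://prerender.example/render?url=${encodeURIComponent(pageUrl)}`
  );
  res.status(snapshot.status).send(await snapshot.text());
});

app.listen(3000);
```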
Also, I see that Tomek has raised his hand. Tomek, what can I do for you? Hey, Martin, hello, everyone. I have a question for you, Martin. Sure. What kind of analysis do you run on the initial HTML?

What kind of analysis do we run on the initial HTML? Pretty much anything that we also run on... well, that's not true. We run a few things on the initial HTML. One thing is early link extraction. So once we have HTML, we can immediately have a look if something in it looks like a link, because then we can immediately queue those for the scheduler. We'd obviously do that again once we have rendered, because usually that produces more links that we need to queue for crawling. But that's something that we do on the initial HTML.

We also, I think, try to understand if this is an error or not. That's not really about the HTML, but if the initial crawl comes back with a 404, then we treat it as such. So you can't really use a 404 page to run JavaScript to load some content. I've tried that in the past; it doesn't get anything into the index. Not a good strategy, really not a good strategy.

We also look for meta tags. So if there are canonicals in there, if there's a meta description in there, or if there is a meta robots in there, we definitely do look at those. Which also means that if you say noindex in the meta robots in the initial HTML, and then the JavaScript runs, you have a problem, because when the initial HTML contains a meta noindex, we're like, oh, this page doesn't want to be indexed, so we don't need to render it, because you already told us you don't want this to be indexed. So be very, really careful with that.

I think we also try to do canonicalization at this point, or at least dedup management, which sometimes does not give us clear signals. And in that case, we might ignore the initial canonical tag. We might also ignore the content hashing that we do on the initial HTML, if the content hash differs in the rendered HTML. So canonicalization, being a longer-running process, starts with the initial HTML but also takes into account what happens in rendering.

What else do we do in the initial HTML? Nothing that I can think of right now. Let me actually see, do I have the slides for this? I have Gary's copy of the slides, that works as well. Let me see if I forgot something. I wish we would make this public. I keep pushing Gary to publish a specific presentation that he gives to new search Googlers, because it has a lot of useful information in it, but we don't really have all of this shared yet. We have everything covered roughly, but not in this specific format, which I think is awesome. And I hope that he makes a video one day. What else do we do? Content parsing. Yeah, I know that we don't do that for the HTML, but if it's like a PDF or something, we would parse that. Yeah, we gather some signals already, but we also then mix them with what we get from the rendering. So most of it happens again after rendering. Does that help?

What about duplicate content detection, regardless of canonicalization, et cetera? Well, canonicalization and dedup, or de-duping, are related, right? When we say canonicalization, it really is de-duplication: we first see all the pages that are similar enough, and then we elect which of these is the canonical. So it's kind of related. We do that, as I said, on the initial HTML, but we also do it on the rendered HTML. So I don't think there is a chance that we would kick out a page without having rendered it first. We do get the hashes immediately, but we compare them to what we get after rendering. And usually in rendering we get different hashes, and then that's what we use continuously. Unless we get the same hashes and we're like, yeah, no, this is actually an empty page, or this is actually a page that we already have in the index. Mm-hmm. Okay, thank you. You're welcome.
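Coming back to the meta robots point from that answer, because it bites people in practice. The trap, sketched with a hypothetical single-page app:

```javascript
// Suppose the server ships this in the initial HTML response:
//
//   <meta name="robots" content="noindex">
//
// and the app then tries to flip it once its content has loaded:
document.querySelector('meta[name="robots"]')?.remove();

// This works in a normal browser, but Googlebot already read the noindex
// from the initial HTML, concluded the page does not want to be indexed,
// and skipped rendering, so this line never runs as far as Search is
// concerned. The noindex wins.
```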
Any more questions? And again, feel free to use the chat, or just speak up, or actually the raise hand feature. Oh, Dave is raising his hand.

Hi, yeah, it's about when you're fetching stuff from an API, et cetera. If you do a GET, does Google cache that the same as it would another resource? It does, yeah. But if you did a POST, would it not cache that? Then we would not cache it. Yes, that's correct. Okay.

Just another one as well, with calls and stuff. I mean, it could be logging on my side or my login area, but I've never seen pre-flight requests, an OPTIONS request, come from Googlebot, or it doesn't seem that way. And you'd kind of expect that. Does Google handle it in a different way, or is it just that my logging's not great? That's an excellent question. I think we do not follow the CORS protocol, because we're kind of not really a user acting upon things, but I'm not a hundred percent sure. That's a really good one. Ah, Dave, every time you ask a question, you create work for me, and I love that. That's fantastic. I would love more questions like this, because I did a CORS test once, but I can't remember what the outcome of it was. Yeah, I need to find that out. Thank you. That's a really good question.

Right, anyone else raising their hands? No. Anyone else having questions in the chat? No. Feel free to drop more questions, definitely do that. Also really happy to see a good mixed bunch of people here. And I think we have lots of different locations, right? So we have Hungary, we've seen Poland, we have seen Germany. No, hold on, we don't have anyone from Germany. Well, we have Switzerland, that's me. I think Simon is in the call as unknown, and I think he's based in Germany. What other countries do we have? Barcelona, so we have Spain involved as well. UK. UK. But I'm from Germany, so, interesting. Interesting. Oh, we're from Germany. Stuttgart. Not too far away. Interesting. So we had, well, UK, yeah, sure, Dave is also from the UK. Pakistan. Wow. Global office hours today. Awesome. Do we have more questions on YouTube? No. All right.
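While we wait, one way to make Dave's GET versus POST point concrete: content you want indexed should be reachable via GET, which can be cached like any other resource, while a POST is not. A sketch with a hypothetical /api/products endpoint:

```javascript
async function loadProducts() {
  // A plain GET: cacheable, so this is the safer way to fetch content
  // that should end up in the rendered, indexable page.
  const viaGet = await fetch('/api/products?page=1').then((r) => r.json());

  // The same data behind a POST body: not cached, so content that is
  // only reachable this way risks being missed.
  const viaPost = await fetch('/api/products', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ page: 1 }),
  }).then((r) => r.json());

  return { viaGet, viaPost };
}
```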
Hi, Martin. Hi. So I had one question regarding this fetching and finding the content as duplicate. In the two waves of indexing system, does it happen that in the initial HTML Google could not detect duplicate content, hence it indexed the page, and then after the final rendering and indexing it figures out that the page is a duplicate, and therefore removes the page from the index?

Okay, that's a very good question. Niraj, is it? Yeah. Yay, Niraj, thanks for the question. Good question. It's one of the questions that haunts me the most. Not this specific question, but the two waves of indexing has been a metaphor that has slightly gone wrong, I think, because there are not really two waves of indexing. We pretty much process the initial HTML, then decide to render, and then index. So pretty much anything that gets indexed has been rendered in the middle of it. That's why I'm always a little conservative when people say, this has been indexed without rendering. Pretty much nearly a hundred percent of the pages get rendered before they get indexed. Hypothetically, what you are describing is possible if rendering continuously failed for some reason: if there's some JavaScript that got stuck, if there's some fundamental error, then it might be that the page starts flip-flopping.

It is also just possible that what you are seeing, especially if you think that it's dropping out of the index and coming back into the index, is a change in the canonicalization. As I said, we put all the pages that we think are roughly duplicates into a cluster, and then we choose which one we think should be the canonical. And sometimes the only thing that happens is that we understand that this is a duplicate page, and different canonicals are being chosen thanks to signals fluctuating. That is normally not related to rendering; that's normally related to other things, because we are combining a bunch of signals into the leader or canonical selection within these duplication clusters. But normally, no, there are very few opportunities for a page to be indexed without rendering. So it's very, very unlikely that a page does not get seen as duplicate before rendering and then gets seen as duplicate after rendering, unless there's something really funky going on.

Okay. And what happens if our content is coming from an API, and the API fails, and Google retries, and only then the content appears? Yeah, that is a problem, because then, depending on how it failed, we might just not see the content that comes from the API, and then basically cluster up URLs that are not technically the same, just because multiple URLs had failures to the API. That's why it's very important to make sure that you have mechanisms in place so that your API doesn't fail. And if your API is flaky, then that's something that I would investigate. Yeah. Okay, thank you. You're welcome.

Tomek has raised his hand. So, when there is a noindex on our page, do you take structured data into consideration or not? I don't think we do, because if you think about it, structured data is then used for rich results mostly, and some other additional semantic enrichment, but basically, if the information we have on the page says do not put me in the index, then I guess we stop processing before we would look at structured data as well.

What about link extraction in that case? Because that was one story. That is a good question. I don't know, because these processes should be running in parallel. So I could imagine us actually doing link extraction on a page that tells us noindex, and only finding out that it's noindexed after we already started scheduling URLs for crawling. Especially if the crawling has already happened, we can't uncrawl something; if a request has been made, it has been made. But I guess you might see mixed signals here. Sometimes we might discover the link, sometimes we might not. And really, it's a case of: if you don't want something in the index, put a noindex on it. And if we discover the other links, that's not a problem unless you also don't want those things to be indexed; if you don't want those things indexed, put a noindex on them as well. The clearer you can make the signals, the easier it is for us to not do something that you don't expect. Makes sense? Makes sense. Awesome.

Let's check the chat. Yeah, Henning, I said Catalunya. I said Spain, or actually Catalunya, so I'm aware that there is sensitivity in that area. Anyone else with questions? Yeah, I mean, you may have overheard it, because I said Spain and I was like, no, actually, it's Catalunya.
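One more note on the flaky API point from a moment ago: on the website's side, you can at least soften transient failures with a retry. A small sketch; the attempt count and backoff are placeholder choices:

```javascript
// Retry a fetch a few times with a growing delay before giving up,
// so a single hiccup in the API does not leave the page empty.
async function fetchWithRetry(url, attempts = 3) {
  for (let i = 1; i <= attempts; i++) {
    try {
      const response = await fetch(url);
      if (response.ok) return response.json();
      throw new Error(`HTTP ${response.status}`);
    } catch (err) {
      if (i === attempts) throw err; // out of retries, fail loudly
      await new Promise((r) => setTimeout(r, 2 ** i * 500)); // 1s, 2s, ...
    }
  }
}

// fetchWithRetry('/api/cats').then((cats) => console.log(cats));
```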
Someone asked for this link on Twitter, so I'll be so nice and share the link on Twitter as well. Why wouldn't I? There we go. Maybe someone else joins us today. If not, feel free to ask your questions either in the chat, or by speaking up, or by raising your hand. I keep forgetting that we have this feature now. That's really nice. I like that feature.

Ooh, holy moly. I'm enrolled in experiments in Hangouts, and now I need to find out if certain features are actually available to everyone or just us testing, because I don't want to use a cool new feature if I'm not sure that feature is actually public yet. I don't want to accidentally leak things. That would be awkward. That would be so awkward. Let me see, do I have these features here as well? No, but then again, I also don't have the raise hand feature here. Oh God, if only I knew if this is public or not. Dang it. Because I just saw a few nice features. Actually, I can probably try to read... okay, ha ha, okay, it's actually public. I can start a poll: which JavaScript framework do you mainly work with? And then there's, like, Angular, React, Vue, and there's an Other. I can launch this. I don't know how it shows up for you, but we now actually do have a poll that you can vote in.

Oh yeah, okay. So people are already... oh. Oh, oh wow, okay. Aha, Angular and React taking the lead. React taking the lead, that's what I expected, to be honest. For those of you who selected Other, what are you working with? I selected Vue, but I do use others probably a bit more now. I use Svelte and Sapper a lot, particularly Svelte. Yeah, someone just said Svelte in the chat as well. So Svelte seems to be... should I do a live stream with Svelte? I probably should, that might be interesting. JavaScript noob? That's fine. Every one of us started at some point, and if you watch the Twitch streams that I do every now and then, you'll see that even I struggle with syntax and stuff. There's no shame in that, it's cool. Awesome that you are getting into Svelte, Zimowit, that's pretty cool.

Hold on, what happens in the poll? Angular and React leading, cool. That's more or less what I expected. Just a quick question for the Angular people: would you consider your projects or companies more enterprisey or more startup-y? Because I have a suspicion about which environments Angular is used in more and which environments React is used in more, but maybe I'm wrong, maybe I'm just biased. I could run a separate poll for this. But then again, how does this work? Aha, aha, okay, well, that doesn't seem so useful. But the polls feature is quite interesting, that's nice. So we ran a poll; I wonder how that comes out in the recording. In case it doesn't come out in the recording: I asked which JavaScript framework people mainly work with, I got eight responses, and three people use Angular, three people use React, one person uses Vue, and one person uses Svelte, as we just found out. Awesome.

In case anyone has a question: I think we are running out of time, but I will take one more question. Yes, Asaf, I did cut my hair, thank you very much. It is more useful when diving; long hair and cold water don't go well together, so I'm pretty happy that I don't have to deal with long hair in cold weather after diving anymore. Even though I do miss the long hair a little bit; it feels weird as well. I can still color it, but I might as well not.
I think colors come out nicer when you have a transition, and for a transition, you need a certain length. I also still have some blue and blonde bits in my hair, and it looks a bit weird, I think. But it'll disappear eventually. It's fine.

All right. Final countdown. Five, four... questions? Three, two, one. OK. Ladies and gents, it has been a huge pleasure hosting the office hours. The next office hours will actually be, I think, next week, or is it in two weeks? I confuse myself with how I schedule these. No, it's in two weeks, and it will be in the later time slot: it will be at 5 PM CET, so seven hours later than this one. And then in four weeks, we'll be in this time slot again. It has been a huge pleasure. I'll upload the YouTube video as soon as I get the recording. And I wish you all a fantastic day. For those of you who are watching this as the YouTube recording, feel free to drop in your questions: go to our channel, go to the community tab, and watch for the next announcement post for the SEO office hours. It has been a huge pleasure having you all here and answering all your lovely questions. See you soon, and I hope to see you all in the next upcoming Hangouts as well. Bye. Bye, Martin. Bye. Bye.