Welcome to another JavaScript SEO office hours recording, our public office hours hangout. With me are ten people in this hangout tonight, and you have submitted various questions on YouTube, so let's see if we find the time to go through all of them. Before I jump into the YouTube questions, does anyone here in the hangout have a question?

Yes, there you go: WebSockets. Googlebot currently doesn't support WebSockets and that kind of thing. Do you think that will ever change? There are things like Firebase that have a live connection, and some of those use WebSockets. And Blazor as well, which is a new thing that's coming. So do you think that will ever change, or will it always stay as it is?

While I can't really comment on the future, because I don't know what's going to happen, the fundamental goal for Googlebot and Google Search is to make the world's information generally accessible, including the web, obviously. So if we see a major trend toward WebSockets for essential communications, we will eventually, probably, support it as well. At this point it is a very niche technology, in the sense that most of the crucial content that people consume does not come over WebSockets, or it has a fallback mechanism. As you said, we are not supporting it at this point in time, and I can't make predictions for the future. It's not that we are about to release this; there are no plans to be communicated at this point for WebSockets. But very good question.

Cool. I will be typing these questions down as well, so that I have a running log of questions and can identify the more frequent ones, which usually is a hint for us that we need to address them in the documentation. So if you see me typing, that's not because I'm chatting with my coworkers or with John; it's that I am trying to keep track of what gets asked. Thank you very much. Any other questions from the audience? If not, I'll start with some questions from YouTube.

I've seen that multiple people on YouTube submitted questions around lazy loading, which is really interesting for me, because it tells me that I might want to look into our lazy loading guidelines again, as they're apparently not clear yet. To start with one: Lighthouse recommends lazy loading offscreen images. I usually do this with JavaScript, and I use either a placeholder or a very low quality version of the image as the initial source. Is this low quality version or placeholder likely to be indexed instead of the images lazy loaded with JavaScript?

That depends on your implementation. If you use the testing tools, you see what we are seeing. If you see the higher quality versions, the non-placeholder images that you load using JavaScript, that's fine. If in the testing tools you see that the rendered HTML contains the low quality version or the placeholder image, that means something in your lazy loading implementation isn't quite right, and in that case we would not see the higher quality version, the actual image lazy loaded by JavaScript. So use the testing tools to determine whether your setup works for us.
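To make that concrete, here is a minimal sketch of a common JavaScript lazy loading pattern; the data-src attribute and the "lazy" class are hypothetical conventions, not a requirement. The point is that the real image URL must end up as the src in the rendered HTML:

```html
<!-- Sketch only: "data-src" and class "lazy" are hypothetical names. -->
<img class="lazy" src="placeholder-low-quality.jpg" data-src="product-large.jpg" alt="Product">
<script>
  // Swap the placeholder for the real image once it approaches the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src; // the image Google should index
        obs.unobserve(entry.target);
      }
    }
  });
  document.querySelectorAll('img.lazy').forEach(img => observer.observe(img));
</script>
```

If the testing tools still show placeholder-low-quality.jpg as the src in the rendered HTML, the swap never ran for Googlebot, and that is what you'd want to debug.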
The other thing is that you can also use the native lazy loading attribute for images, so that you have something that degrades nicely, or actually progressively enhances. A browser just sees a regular image: you specify the image source that you want, the high resolution one, Googlebot sees that too, and the lazy loading happens on top when the browser supports it. So that's not an issue, and it's built into the browser, so you don't have to worry about implementation problems in your own JavaScript.

Next question: how can I ensure that a one-time notification on each page is not indexed by Google? Is an onload event a solution? Because of this one-time notification, we are now found in searches including the term corona.

There are really two different questions in this one. Do we not see things that run in the onload event handler? No, we're going to see those, so that is not a solid way of making sure we don't see your notification. We might occasionally miss things that only trigger on the onload event, but that's very unlikely, and usually caused by weird or shoddy JavaScript. I would simply check whether the navigator user agent is Googlebot, and if it is, not show the notification. That's one way of doing it. An alternative is to hide it behind a user interaction, so only users that scroll or click or do something on the page see the notification. For corona warning notifications it's basically the same situation as with a cookie banner: you could use something like a button and only show the notification once the user has interacted with the page, clicked on it, something like that. If you don't want to wait for an interaction, I would go for loading it only when Googlebot is not there, basically sniffing out whether it's Googlebot or not. That's not a great practice, but it is not cloaking either, because you're not swapping out content to mislead the user; you're just loading additional content. It's basically the same situation as server-side or dynamic rendering, where the content might be slightly different but is still the content the user expects when coming to your page. Unless the corona notice completely removed the entire content and only loaded the notice instead; that would not be a great idea. Use this wisely and carefully and you'll stay within the limits and won't be cloaking, so you should be fine.

Questions from you all, or should I continue with the YouTube questions?

Yeah, on the topic of cloaking: can you go into a bit more detail on what is considered cloaking and what's not, or where the gray line is?

That is a really interesting question. Where is the gray line? I can't give you too much detail on that, but fundamentally, cloaking means misleading the user. If I see Googlebot requesting my site and I say this website is about kittens and butterflies, and then when a user goes to that website instead of Googlebot they get, I don't know, an online drugstore or knockoff products, that would be very much against the intention of the user, and it would not match what we show in search results when someone searches for "cute kitten" or something like that, right?
So that's very, very clearly cloaking. What isn't cloaking is if my website content is slightly different, because we all know that with responsive web design we might have slightly different content to begin with. On a mobile phone I might only show one product instead of ten and have the user click through multiple pages. That's not cloaking; that's just slightly different content depending on what the browser or the device is capable of, and that is fine. If you show slightly different content to Googlebot than to real users, like a notification or a pop-up that doesn't show when Googlebot comes in, that is mostly fine. Unless it's a pop-up that has 90% of the content in it and the actual page only has an image; then we're again at "hmm, does that still fall within the bounds of the user seeing what they expect, what we saw as Googlebot?" But generally speaking, as long as you're not misleading the user, you're on the safe side. What you shouldn't be doing is misleading your user; anything within reasonable bounds of that is not a problem.

OK, because I can think of two examples. The first: a dynamic approach, like server-side rendering part of the page but then enhancing the behavior with JavaScript, where the content is actually the same. The second, more extreme case: loading a translated version via JavaScript.

The second one is a little tricky. The first case is definitely not cloaking; it's just slightly different content with the same topic, and I have the same intention when I go there. Loading completely different content, in the sense of a completely different language version, is trickier. We do have mechanisms for that kind of thing; for instance, you can use an alternate URL linked via hreflang. That would be safer. Otherwise we're like, OK, this website is about, I don't know, cats, and it very clearly is about cats, but then when I go there from my browser it says "Katzen", and it might not always be easy for us to tell that this is the same content, just translated. So I would be very careful doing that kind of thing. You see, there are certain gray areas, and it might be that this actually works, I don't know, but it sounds to me like there is potential for it to go wrong, so I would tread carefully. But just loading slightly different versions of the content because of server-side rendering versus client-side rendering is definitely not something you need to worry about.

OK, thank you very much. You're very much welcome. Other questions from the hangout? Or I'll go back to YouTube.

OK, a question submitted on YouTube: I have a site that is Angular based, and all of the content, including meta tags, title, canonical, and the site content itself, is rendered on the client side. Will this affect our ranking?

No. Unless it's broken and we're not seeing your content, it will not affect your ranking. If your website is very, very slow, that might affect your ranking, because speed is a ranking factor, but it's only one out of hundreds. So you shouldn't worry too much about this, but definitely test, test, test if you do that.
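For illustration, this is roughly what client-side rendered head tags amount to; a sketch in plain JavaScript rather than Angular, with a placeholder URL. Google indexes what ends up in the rendered HTML, so tags injected like this are picked up as long as rendering succeeds:

```js
// Sketch: title and canonical set from JavaScript at runtime.
// "https://example.com/products/example" is a placeholder URL.
document.title = 'Example product page';
const canonical = document.createElement('link');
canonical.rel = 'canonical';
canonical.href = 'https://example.com/products/example';
document.head.appendChild(canonical);
```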
I also noticed that GSC, Google Search Console, is warning us about mobile issues on tag archive pages due to how they render on first load, as the CSS gets injected later. Any recommendations?

Try to load the critical CSS as quickly as possible; that's also a recommendation in Lighthouse, by the way. Then only load in later the CSS that is non-essential for the initial render. That should make these warnings go away, because late-loading CSS can actually cause trouble when we try to figure out whether a page is mobile-friendly. So I would recommend inlining as much critical CSS into the page as possible, but not too much, for the obvious reason that your HTML then gets too large, and so on. Lighthouse has pretty good guidance on this, so check out Lighthouse for your CSS troubles.

Questions from you all? Maybe these inspired you. OK, I'll take one more from YouTube, and then we'll see if questions come up here.

Hi Martin, what is your recommendation for the most optimal way of troubleshooting if a JavaScript-based website is having issues with slow indexing, where content is not indexed immediately due to longer processing times?

I'm not 100% sure what this means. If this is aiming at the two waves of indexing: don't worry about them, we discussed that last time. If you see that your website is very slow for users, that's something you want to improve. I highly recommend using WebPageTest or Lighthouse to get a feeling for how fast your website is on people's devices. But generally speaking, slow indexing because of long processing times is not really much of a concern these days. Anyone have a question to jump in with?

Yeah, can you hear me? It's working now; I had a little bit of trouble with my Mac. I have a question about structured data. We have a dynamically rendered page and add structured data to it, and we don't know whether Google can pick this up normally with its rendering engine, or whether it's difficult for Google to see the structured data. We don't see anything in the Google search results.

Right, that's a very good question. When you say dynamic rendering, do you mean you are using something like Rendertron or prerender.io, or do you mean server-side rendering?

We don't do prerendering; at the moment we only deliver client-side. Googlebot, I think, does the rendering, with Puppeteer, like you talked about in your stream.

Yes, pretty much; Rendertron, I mean. We're not actually using Rendertron, but we do render pages, that we do. I can show you something. To give you an example of what you can do, I have created a few test pages. Let me make this larger so it's easier to read. Here we have a website that does not have any structured data in it; as you can see, there's no structured data here, but it does use Google Tag Manager. And if you check out this other one, you see that it does have some structured data, added using Google Tag Manager. It's injected dynamically.
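Injection along these lines usually boils down to appending a JSON-LD script tag at runtime, for example from a Tag Manager custom HTML tag. A sketch, with placeholder organization data rather than the demo page's actual markup:

```js
// Sketch: JSON-LD structured data injected with JavaScript at runtime.
// The organization details below are placeholders.
const data = {
  '@context': 'https://schema.org',
  '@type': 'Organization',
  'name': 'Example Org',
  'url': 'https://example.com/',
  'logo': 'https://example.com/logo.png'
};
const script = document.createElement('script');
script.type = 'application/ld+json';
script.textContent = JSON.stringify(data);
document.head.appendChild(script);
```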
And when I want to know whether Google actually sees this, I can just go to the Rich Results Test. If I run this, I will see whether it gets picked up or not, and it gets picked up: in this case, it has successfully detected that there's some organization markup and that there's a logo in there. You can also use the rendered HTML to check whether your structured data shows up: if it is in the rendered HTML, that means we are seeing it. So generally speaking, it should not be a problem. If you need support for that statement, check out our developer documentation: under Guides, "Enable rich results", there is "Generate structured data with JavaScript", which has a lot more detail than what I just described. Basically, we have documentation for this.

Now, the question is: why am I not seeing this in the Google search results? That's because structured data is a necessity, a requirement, to be eligible for rich results, but it does not mean we will always display rich results. Even if your structured data implementation is correct, we might choose not to show rich results, for various reasons. There might be a faster website; there might be a website with higher ranking information; there are lots of factors to consider. It's basically a ranking question, and I can't really comment on ranking questions. But you make yourself eligible by having valid structured data, and you can use the Rich Results Test to see whether we are picking it up.

So to understand this: if I add this and the rich result validates, it's not guaranteed that Google shows it?

Correct.

But if Google shows no structured data at all in the search results, that can be a problem of the site's ranking or something else?

It can be all sorts of things that make it not show up, mostly ranking-related things. As long as we are picking it up; and Search Console should also show you whether we have picked up the structured data. If it is in Search Console, and/or you can see it in the Rich Results Test, then you have successfully implemented it on the technology side of things. It just doesn't guarantee that it shows up in the search results.

Perfect, thank you. You're welcome. I have a question in the chat: "I have a question from my developer partner, who doesn't want to ask himself directly." That's fair. "We sometimes see websites which check the user agent in order to set specific behaviors for Googlebot. Can it trigger something? Can it be seen as potential cloaking? And if so, how does Google handle it? It's very common here in our bilingual nation."

Interesting question; we scratched on that earlier. If you are changing content so that we are under the impression you are misleading the user, that would be considered cloaking. Just user agent sniffing and then loading slightly different content can be fine. For instance, the example I gave earlier: saying "oh, this is Googlebot, so I'm not showing a corona information banner" is fine; that's not primary content on your website anyway, so you can hide it. If you're using it to swap out or redirect from a page that is about kittens to a page that is about cheap drugs online, that is cloaking. Generally, when we get content that is wildly different, that is considered cloaking.
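To make the benign end of that concrete, hiding the corona banner mentioned above could look like this sketch; the element ID is hypothetical:

```js
// Sketch: only reveal a non-essential notification to non-Googlebot visitors.
// This hides additional content rather than swapping it out, which is the
// case described above as not cloaking.
const isGooglebot = /Googlebot/i.test(navigator.userAgent);
if (!isGooglebot) {
  document.getElementById('corona-banner').hidden = false; // hypothetical element
}
```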
Where is the border between legitimate content changes and illegitimate content changes, a.k.a. cloaking? That is a tricky question, and you want to stay away from the murky waters in the middle of that spectrum. As I said, server-side rendering something, or hiding a notification from Googlebot, is definitely not on the dangerous side of the spectrum. Going somewhere else completely and showing completely different content, that's the risky end. We had the example of translations: what if I auto-translate the content under the same URL? That is in the murky waters where I'd say it might work, it might not; I would probably steer clear of it and not risk it. It's a tricky one. But generally speaking, small content adjustments do not count as cloaking.

That's where the bilingual part comes in. We have some really funky decisions made by companies about which language they want to show you, for instance when a contest is not available in some provinces or regions in one language version.

So let's say you have a website in, I don't know, Canada, where you have French, or whatever they consider French, and English, and you have some content that is only available in English. If it's only available in English and I'm on the French side, I would either have a page that says "this content isn't available in this language or in this province, how about you go over there" with a link to the other version, or just serve a 404. But don't try to be clever, and don't try to force different content on users where they don't expect it. It's not a great user experience, and I'm not sure Googlebot will handle it gracefully. So you definitely want to test this very carefully, or avoid it to begin with. I think that's my guidance on that. Do we have questions in the audience, or should I go back to the submitted questions?

An extended question to my question from before. I've tried this with my page, and we added the 3D structured data. I get no test result for the 3D structured data, but I do for other structured data. Is it that the testing tool cannot test the 3D structured data at the moment, like the one from Wikipedia with the tiger?

I think that is a definite possibility, because I think the test currently is limited. We have something saying that there is a limitation; ah, here, supported types. So currently the Rich Results Test supports a bunch of types, but not all of them. I can put the link in the chat if you want; that's the piece of documentation that explains which types are supported. What you can do, though, and I'm going to run this real quick through screen sharing one more time: if you run the Rich Results Test and you don't see your structured data, you can look for the structured data in the rendered HTML. You can check that it's present there, and you can also take the rendered HTML from there and paste it into the Structured Data Testing Tool. That one should hypothetically support this better, because it supports a few more types. It should show up there, but even that is not a guarantee that it's correct; you would have to manually make sure that everything we say is required is actually present. 3D data is a tricky one because it's relatively new.
And I'm not even sure whether it is public-public, or whether you need to be in an early access program. Let's see, that's relatively easy to find out; where's the structured data gallery? It doesn't look like the 3D data is generally available at this point; at least it's not showing up in here. So you might need to be in an early access or early adopters program. Yes: 3D and AR results are currently limited to people in the early access program. You can use a form to express interest, if you haven't done that already.

Thank you. OK, good, I need to call my boss, and we need to get into the early access program, because most of our pages have 3D data.

In that case, yes. You're welcome. OK, other questions? Oh, a question on YouTube has been answered, one of the lazy loading questions, that's cool. In that case, I'll have a look at YouTube one more time.

Hi, does Googlebot have issues crawling isomorphic pages, and does it understand client-side routing, or is it still safer to do the routing on the server side and avoid page rehydration on the same URL?

Nope, Googlebot does not have fundamental issues with isomorphic pages. Isomorphic pages, for those of you who are wondering, are basically server-side rendering plus hydration, where you run more or less the same JavaScript on the server side as on the client side. We support that. Client-side routing is fine as well, and rehydration is OK too. Just make sure it's implemented properly, and test it with our testing tools to see that the content you expect is actually visible to us in the rendered HTML. You can use the URL Inspection tool in Google Search Console, the Rich Results Test, or the Mobile-Friendly Test; any of these work. With the Rich Results Test and the Mobile-Friendly Test you can also test local development URLs: if you use a tunneling tool like localtunnel or ngrok, you can plug basically any URL in and see the rendered HTML that we see. That's quite nice.

Next question: this article compiled quotes from John Mueller, Gary, and Martin over time, and some of those quotes seem to contradict each other. It's a guide from Oncrawl on lazy loading, from April 2019, about a year old now. I actually read the article earlier, because I saw this question earlier. If you look at Google Search over time, you will see that it keeps changing and improving; our mission is to make sure we understand the web that you create, so we keep improving things. That also means that quotes from a month ago, a year ago, five years ago might no longer be true. Things evolve. In this article there are a few things that look like they contradict each other but don't necessarily, and this is elaborated in the question as well: I know lazy loaded images are in principle indexable if done properly, but is it worth the risk to lazy load important images like product images, even if they are below the fold?

Yes, it is worth it, because especially if you use native lazy loading, there is no risk whatsoever: it is an image element that has the actual product image as the source, and it just loads lazily when the browser supports it.
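In markup, native lazy loading is just the loading attribute on a regular image element; the file name here is a placeholder:

```html
<!-- The real product image is the src, so crawlers and non-supporting
     browsers load it normally; supporting browsers defer it. -->
<img src="product-large.jpg" loading="lazy" alt="Product photo">
```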
So even crawlers that don't run JavaScript will get the high quality image, while users on more modern browsers get a slightly better, or actually much better, experience, depending on your network speed and the price of your network, especially on mobile. Lazy loading is quite an improvement, and I would definitely recommend it. I would be careful with custom implementations these days, because why bother? If some browsers support it natively and other browsers fall back to less great but still solid behavior, I think that's an opportunity worth taking.

Also: is noscript actually helpful for image indexing with lazy loaded images, or does Google ignore it, like John Mueller said?

That quote is slightly out of context, I think, because we do ignore content in noscript, except for images, interestingly enough. For images specifically, we have a workaround that allows you to use images in noscript, and we will index them. But we are not sure how long that's going to stay that way: if engineering finds out they don't need to support this, they might remove it. We have seen with the pagination situation last year that that can lead to confusion. It does happen every now and then; there are many, many engineers working on this, and sometimes they decide that we get the signals elsewhere so it's no big deal, they run an experiment, find out it really is no big deal, but then we have to communicate the changes. So I would shy away from noscript, because there are better alternatives: native lazy loading, which is a fantastic progressively enhancing approach, or a JavaScript lazy loading implementation that is as robust as possible. I would not use the noscript fallback. At the time of recording, today, April 8th, 2020, as far as I'm aware we are still supporting the noscript workaround for images, but that's specifically for images; it does not work for other things.

Cool, questions in the meantime?

I've got another quick one, if that's OK. When you use the testing tools to render a page and view the rendered HTML, it flattens iframes into it. Are they considered just part of the content for ranking purposes, or do we not know? Is that secret sauce?

It's not secret sauce, and it is observable in the testing tools too. In certain cases we will flatten iframe content into the document: that is when the iframe is large enough and, I'm not sure what the other signal is that we use to consider it inlineable. But if it's inlined into the HTML, it is at least sent to indexing. What I don't know, because that is a question about indexing and somewhat related to ranking, is how ranking and indexing actually treat this content. It is marked as inlined content; I think "inlined" is the term we're using. I'm not sure how it's used in ranking, and I'm not sure exactly how it's used in indexing. So I would assume it gets inlined under certain conditions, and if you see it inlined in the testing tools, that means we are at least seeing the content as part of the document. Very good question. Other questions? Then I'll take another one from YouTube.
There is a website that is dynamically excluding part of the content on its mobile version, using JavaScript instead of CSS as the responsive solution, behind a "read more" button. The content is sent by the server but excluded on the client side. The Mobile-Friendly Test and GSC seem not to recognize that piece of content. Is it correct to assume that the hidden content may not be read by Google?

I'm not 100% sure what the details of this implementation are, but if you say "read more" button, I'm assuming you mean an actual button we would have to click to load the additional content. If that is the case and the content is not in the rendered HTML, no, we're not going to see it, because a user interaction is required, and we are not interacting with your page; we're not clicking on anything. Generally speaking, if you don't see the content in the rendered HTML in the testing tools, we are not seeing it when sending it to indexing. So you want to be very careful with that implementation, unless you want to exclude that content, in which case: mission accomplished.

Actually, in this implementation the content is passed by the server and then suppressed on the client side.

What does that mean, "suppressed"?

Like a .remove() with JavaScript.

Aha, OK, then we're not seeing it. If you remove it from the DOM, we're not seeing the content.

OK, that answers my question, because we were discussing whether the server sending it was enough for you to render and interpret it. So it's not, right?

No. If you remove it and we render the page, we're not seeing the removed content, because we send what is rendered to indexing. So the tools show you: you have successfully removed the content you wanted to remove, and we don't see it in indexing.

Awesome, thank you. You're welcome. OK, other questions?

OK, building on this topic: what about content on mobile that is hidden or invisible? For example, a sidebar that appears on desktop, but that on mobile requires a user interaction, like a tap on "show sidebar", to become visible. The sidebar content is actually in the HTML; on desktop it's visible, but on mobile it's not visible by default. Would Google even consider that content, given that it's not visible until a user taps "show sidebar", even though it's there?

Let me be very careful here. Anything that is in the DOM is considered for indexing. Whatever we render goes into indexing; that means something that is invisible but present in the DOM, in the HTML, we will see. Whatever is in the rendered HTML that the testing tools give you, we will use for indexing. Now, if it is invisible, we might handle certain cases differently. For instance, if it is invisible but matches topically, we would consider it maybe not as important as the visible content, but we would still consider it; we might not show it in bold in the snippet, but we will still consider it. If it is unrelated, or if it feels like a spamming technique to just add extra content that the user doesn't really see, doesn't need, and doesn't benefit from, then we might exclude it from the actual weighting.
But fundamentally, if it is related, useful content, even if it is not actually visible, as long as it is in the DOM, we will see it. The rendered HTML is what goes into indexing, and indexing then makes decisions based on things like whether it's visible or not, but that doesn't mean we are not seeing it or not using it for indexing.

OK, thank you. I've had this question for so long.

No worries. It is a tricky question, because people mean different things when they say "not visible". Sometimes it turns out they mean "not in the rendered HTML", and if it's not in the rendered HTML, no, we're not seeing the content. And then you can do silly things when you try to hide stuff; it's pointless. Requiring a user interaction to add content to the rendered HTML is the other tricky one: if you have to click an actual button to load the thing, we would not see it, because it isn't really present. So there are gray zones. But very fundamentally, we do see invisible content if it's part of the rendered HTML.

Thanks, now it's clear. It's finally clear. Awesome.

To answer Miriam's question, "what's the one question you were itching to answer today?": I already answered it. That was the lazy loading guide question, the one saying there are so many comments and quotes from you all and they contradict each other. Well, the article is a year old, and the comments were collected over a long, long period of time. That's something fundamental with SEO: the web is moving forward really quickly, which is amazing, because it means we get to improve and build cool new stuff on the web, and Google Search is trying to keep up with the web as it evolves. So things change, constantly; the only constant is change. That also means you want to be very careful when you read outdated articles, or when someone says "when I tested this five years ago". Usually when I hear someone claim something where I think "that's outrageously wrong" and I ask where it comes from, they say "oh, like three years ago". Three years ago we didn't have headless Chromium in Googlebot; we didn't have the evergreen Googlebot. This has dramatically changed since then.

So the best recommendation I can give you is: whenever you hear a quote and you're not 100% sure whether it's true, test it. Test it. A good article should tell you "this is what I did, this is where I put it in". For instance, there's an article that explains that structured data injected via Google Tag Manager does not work with Googlebot. That article is about two years old, and it has a piece of code that you can copy onto your page and try out. I took that piece of code and put it into the Structured Data Testing Tool, and indeed, the Structured Data Testing Tool does not show it. But the Structured Data Testing Tool is not the latest and greatest in terms of using the right infrastructure. The Rich Results Test, however, does use the actual indexing infrastructure and shows you what we actually see when we index your page, and that one showed the structured data. Aha: so the source of these conclusions is outdated, because the Structured Data Testing Tool is outdated.
But because this article was very transparent and self-contained about how it tested the hypothesis it started with, I could follow along and say: yes, the article was right back then. It is no longer the case, as shown by taking that exact example, putting it into the current tools, and seeing that it works where it didn't when the article was written. So always take everything with a grain of salt, including things Google has said. If I say something today, it might be valid today and tomorrow, but not in two years; maybe not even in one year. Make sure you have the latest sources. Our documentation is not always 100% up to date, but we are trying our best to stay as current as possible. Usually we draft the documentation before a thing launches and update it in one go when it launches; if not, we communicate that very clearly. So don't jump to conclusions, test for yourself, and try to find the most up-to-date source of information.

Now, another question from the chat. Giacomo is asking: for websites like e-commerce sites that use structured data for products, when we need a very quick update on availability for a specific product, out of stock or something similar, would you suggest implementing the structured data in the HTML without JavaScript? Will Googlebot parse the structured data from the non-rendered version of the website without waiting for the page to go through the render queue, or will the structured data be parsed anyway after the rendering phase?

The render queue has a median of around five seconds, so assume every page gets rendered these days; that does not make much of a difference. That being said, as noted in the guidelines for structured data and JavaScript, we do cache aggressively. If you use Google Tag Manager, the JavaScript itself does not need to change, because it's just a data change. But if your JavaScript needs to change to reflect the new structured data, you definitely want to use proper cache busting, something like long-lived caching with content hashes or version numbers in the file names. Because if we use an outdated cached version of your JavaScript that contains the structured data as part of the asset, we might not see the update very quickly. So I highly recommend making sure your JavaScript updates and caches properly: whenever you make an update, you want what we crawl to be the latest version rather than an outdated one. If you don't version your assets, that is very, very hard to debug. Generally speaking, putting your structured data in the HTML is probably always going to be more robust, but there's nothing inherently wrong, or faster or slower, about JavaScript and structured data.

What's the average expiration date for technical statements from you?

Oh my goodness, I don't know. Sometimes it's years, sometimes months, sometimes weeks. If you had asked me about the Google Chrome version being used for rendering, say, two weeks before I/O last year,
so at the end of April 2019, I would have said it's Chrome 41, and that would have changed immediately in May. I would have hinted that it was potentially about to change soon, but some people then ignore the part where I say "this is about to change" and just quote me as "Martin says Chrome 41", which is true, I said that; they're just ignoring the sentence I said afterwards. So be very, very careful when people are quoting me. Generally speaking, just point to our guidance; our guidance is 99% likely to be the actual source of truth. Average expiration? I don't know. A month, maybe?

A cloaking question: if we use dynamic rendering on a page with infinite scrolling, is it OK if the static HTML version that crawlers see has href links to the pages in it?

Yes, that's fine. That's absolutely fine. If you have pagination links, previous and next, in the version that only Googlebot sees, that's not a problem. That's perfectly fine; that's not an issue.

All right, one more question from YouTube, maybe. Would you generally recommend implementing dynamic rendering on e-commerce sites that use JavaScript to display products on category pages? My feeling is that this should make indexing safer and more accurate, especially for crawlers other than Google.

We've said it multiple times: as far as we're concerned, dynamic rendering is a workaround. I would not recommend it to anyone who can reasonably switch to server-side rendering with hydration, because server-side rendering with hydration gives more robust rendering to crawlers that don't understand JavaScript, as well as a better, usually faster, experience for users. Dynamic rendering is only useful for bots, especially those that don't run JavaScript. So it's a workaround, and I would not encourage people to do it unless it is the most viable option for them, for instance if server-side rendering is an investment they are not willing to make, or can't make for technical reasons. If you can't move to server-side rendering and hydration, sure, go for dynamic rendering; it will definitely help you with crawlers and bots that don't run JavaScript, or don't run it as reliably. There's nothing wrong with it, let's say. But understand that dynamic rendering, even as a workaround, incurs infrastructure costs, means maintenance on your end, and can hurt you if it's implemented incorrectly. Anything that is implemented incorrectly can hurt you, including static HTML; this is just more complexity, so consider carefully whether you really need that additional complexity to reach your goal. If you don't really care about search engines that don't support JavaScript (as far as I'm aware, Bing supports JavaScript, Google supports JavaScript), and your rendered HTML in the Google tools looks fine, at least we are not concerned. I'm not sure how you test in other search engines, but unless you have a good reason to implement dynamic rendering, I wouldn't.

OK, do we have other questions? Do you, ladies and gentlemen and others, have questions for me?

If nobody has a question, I have one. Go ahead. It's about the cloaking topic again. We have 3D data, and a table with information about this 3D data, on our page.
Googlebot comes to us with the mobile version; I can see that in GSC. The point is, in the mobile version we show the user the 3D model first, and then they can click a button, the table is shown, and the 3D model is no longer displayed. Is it cloaking if both pieces of content are in the page at the same time, but when Googlebot comes, it sees the table first?

No, that's not a problem.

OK, perfect. All right, awesome. I think it's time for the last couple of questions; now is your chance. I've run out of YouTube questions as far as I can tell. Let me refresh the page; maybe there's an additional one. Oh, actually, that's not true, there is one more question: can FAQ schema and special announcement markup be read and indexed by Google when delivered through JavaScript powered by a third-party tool or platform?

Possibly. It depends on the implementation, and it depends on whether they let us request their resources. Let's say your site is example.com and you are using a service from, I don't know, structureddata.org, and structureddata.org says their api.js shall not be crawled by robots; they block it in robots.txt. What happens then is that when we come to your website and your website loads that JavaScript file, we can't load it, because their robots.txt tells us not to fetch it. So Googlebot can't fetch the JavaScript, the JavaScript doesn't run, and the content doesn't show up. That scenario happens more often than you might think. And there are other potential issues as well: there can be problems on their side, there can be problems in your JavaScript. So you have to test your implementation. Fundamentally, technology-wise, it is absolutely fine to have FAQ schema and special announcements generated by JavaScript, following our JavaScript and structured data guidelines. But you want to test it very carefully. If the test shows you it's good, you're safe. If not, then you want to either talk to your third-party provider, or migrate away from that third party if this is something you care about. Test, definitely test.

OK, time for a last question from the audience, if anyone has one. You can also write it in the chat if you don't want to speak up; I'm flexible either way.

Cool, the chat has a question: is there a rendering budget for websites? No. "Asking this in relation to my previous question, because I can see that when using Google Tag Manager for structured data on a big e-commerce site, the structured data is not updated after a Googlebot visit, even after more than a week, whereas using the HTML version we get the data refreshed much faster."

That is possible, but there is no such thing as a render budget. This could be a caching issue. Without looking at the specific website, I can't really make a judgment, but if you are seeing issues like this, you're very likely running into caching. If you really care about the data updating very quickly, then either host and version the JavaScript yourself, or consider putting the data in the HTML. Again, caching is king here. Also, if we are not crawling the page very often, the update won't be faster either way; but from what you say, we are crawling quite frequently, since the HTML version does have the update while the Google Tag Manager version doesn't.
So I guess the Google Tag Manager setup might also be affected by caching, which would surprise me, but it's not impossible.

OK, excellent. I would like to thank you all very much for these fantastic questions, and for joining me here live as well as on YouTube to bring us your questions and discuss them in this forum. The next JavaScript SEO office hours will be in approximately two weeks' time; I'll update the YouTube community feed with the actual date, and I will also post the link there again. Thank you very, very much for being fantastic. Stay safe, stay healthy, and all the best. Hope to see you soon again. Bye-bye. Bye-bye. Stay healthy. All the best. Bye-bye.