Hello and welcome to the JavaScript SEO office hours for January 2021. I hope you all had a great "turn of the years", I think that's what we say in German, but I'm not sure you can say that in English. I hope you had a good start to the new year and that you're all safe and sound out there. I'm really happy to see there are a bunch of really good questions on the YouTube post for this Hangout. I'll also invite my two guests here in the live meeting to contribute, but let's go through the YouTube questions first.

The first question comes from Jane. Jane asks: how often does Googlebot crawl or render JavaScript for indexing? Is it the same frequency as crawling HTML, or is it less? Really good question, because the answer is neither simple nor short. The very short version is: yes, Googlebot crawls and renders JavaScript for indexing as much as it crawls HTML, because it's pretty much one workflow. The crawl happens, then the rendering happens, then the indexing happens, roughly. If that's the level of understanding you're looking for, stick with that: the crawling happens, and then the rendering happens as well.

If you want a little more detail, I'll happily provide it. Interestingly enough, there might be reasons for us to render more often than we crawl. Sounds weird, but hear me out. We might crawl the HTML, start rendering, and then the rendering for some reason has some sort of problem, not related to your JavaScript or your website specifically, but to our infrastructure. So we might render a second time. That's totally transparent; it's not something you need to worry about or can influence. It just happens automatically.
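The crawl, render, index flow just described, including the internal render retry, can be sketched as a mental model. To be clear, the stage functions here are stand-ins invented for illustration, not real Googlebot internals:

```javascript
// Mental-model sketch only: crawl once, then render (with one retry on
// an internal failure), then index. The crawl is NOT repeated when only
// the render step has to be redone.
async function processUrl(url, { crawl, render, index }) {
  const html = await crawl(url); // fetch and cache the raw HTML

  let rendered;
  for (let attempt = 0; attempt < 2; attempt++) {
    try {
      rendered = await render(html);
      break; // rendering succeeded
    } catch (err) {
      // Infrastructure hiccup unrelated to the site: retry the render
      // without a fresh crawl; give up after the second attempt.
      if (attempt === 1) throw err;
    }
  }

  await index(rendered);
  return rendered;
}
```

The point of the sketch is only that a render retry consumes the already-crawled HTML again, which is why rendering can happen more often than crawling.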
It can also be that the crawl brought in a lot of resources that changed in the meantime, so we might crawl the sub-resources again as part of the render. In that case we render more often than we crawl. But in general, we render as often as we crawl.

Sometimes, however, we might crawl more often than we render. If, for instance, another part of Google Search wants to validate or check something, or a crawl goes in to see whether something has changed in the content, we might crawl, decide nothing has changed, and then, hypothetically, not render. That situation can happen, so we might not render as much as we crawl. It can also be that we crawl, we make the request, something goes wrong before we can render, and we have to re-crawl. In this case, too, we would crawl more often than we render.

Again, the general rule is that we crawl and render similar amounts: whenever we're crawling, we're also rendering afterwards. So that's not something you need to worry about or take action on. But to answer the question completely, with a little more background: generally, yes, the same amount, but there are cases where we might render more often than we crawl, or crawl more often than we render. Either way, it's nothing you need to act on, be aware of, or worry about.

The next question comes from Kevin. Kevin is asking how to apply metadata to 360-degree photos for Google My Business. That's a question for the Google My Business support forums; it's not something I know about, and it's not specific to Search or JavaScript, so it's something to discuss with the Google My Business folks. And then: can businesses get their products on Google Lens? That's a Google Lens question, or a general Google Search question.
Kevin, I'm sorry not to be able to answer these; they're a little outside my area of expertise.

Harris, or "the Harris", I'm not sure, is asking: does Google really execute JavaScript? Yes, it does. The question behind it is whether Googlebot just reads the source code, or renders the page on the client side and then reads that. We do render the page, just like your browser would. So we execute the JavaScript, and whatever gets injected into the DOM by the JavaScript can be taken forward. You can see that by looking at the rendered HTML in the URL Inspection tool, in the Mobile-Friendly Test, or in the AMP Test. All of that includes content generated by JavaScript, because we rendered the page properly.

Harris has a second question, and actually a third one as well. How does Google deal with dynamic content from JavaScript that changes with some condition, like user clicks, location, browser, or timing? You can find that information in our wonderful documentation at developers.google.com/search. Googlebot does not click or scroll. Things like location won't work, because we reject location requests, so you can't really get a location in Googlebot; clicks won't work either, because we don't carry out clicks, and we don't give you the user's location when Googlebot renders pages. If the content depends on timing, that might actually work, although time behaves differently when Googlebot accesses your website than it does in the real world. You can use the URL Inspection tool, or any of the other testing tools we provide on the Search side, to see what's in the rendered HTML. If your content shows up in the rendered HTML, you'll be fine.

How does Google work with PJAX? That's a jQuery library that loads HTML content without reloading the page or changing the address URL.
I think what they mean by PJAX is an older technique where you have a single-page application and the JavaScript just replaces the content on certain actions. I'm pretty sure we do not see that, because, again, we're not interacting with your content. So if it only loads different content on a click, rather than under its own URL, that would not work. You can see that described in the best practices guide that we have for JavaScript, again at developers.google.com; there you'll find more information on how JavaScript is processed.

The next question comes from Lorenzo. Lorenzo asks: how can I be really sure that Google is accurately rendering all of the content on the page? Looking at the screenshot from the URL Inspection tool in Search Console and from the Mobile-Friendly Test, I see that a lot of JavaScript elements are not loaded; the rendered HTML code, on the other hand, is perfect. Can I trust that the rendering is fine just by checking the HTML code with these tools? Yes, you can, because that's what we use for indexing. We don't actually use images of what the page looks like or anything; we care about the rendered HTML code and some of the layout information. Fundamentally, if it's in the rendered HTML, you will be fine. That's what you should be checking.

Could I also check by viewing the cached versions of my web pages in Google search results? No. The cache is not meant to include JavaScript-generated content. It's an outdated feature that we keep around for convenience, but it might not follow the pipeline as closely as other things. The testing tools and the rendered HTML are what you want to look at.

Bagwari asks: how are Wix websites doing in terms of rendering? Wix uses a lot of JavaScript for almost everything; will a lot of JavaScript affect my crawl budget? There are a few things in these questions, so I'll run through them. How does a Wix page render? Just like any other website.
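Going back to the PJAX answer for a moment: since Googlebot never clicks, that pattern only becomes crawlable when every view has its own URL and a real link pointing at it. A minimal sketch of that crawlable setup follows; the route table, selectors, and markup are invented for illustration:

```javascript
// Every view is addressable by a URL of its own, so Googlebot can reach
// it through a plain <a href> without any clicking.
const routes = {
  '/': '<h1>Home</h1>',
  '/articles/apple-pie': '<h1>Apple pie recipe</h1>',
};

// Pure lookup: pathname in, HTML for that view out.
function renderRoute(pathname) {
  return routes[pathname] || '<h1>Not found</h1>';
}

// Browser-only glue: keep real links (discoverable by crawlers), but
// enhance clicks with the History API so navigation stays in-page.
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a[href^="/"]');
    if (!link) return;
    event.preventDefault();
    history.pushState(null, '', link.getAttribute('href'));
    document.querySelector('#app').innerHTML = renderRoute(location.pathname);
  });
}
```

The design choice that matters for SEO is simply that content is keyed to URLs rather than to click handlers.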
You can check it in the URL Inspection tool and all the other tools that we provide, and look at the rendered HTML. If your content is there, you'll be fine.

A lot of JavaScript may have an impact on your crawl budget, depending on how the site is built and what it does. Generally speaking, unless you have a million pages or more, crawl budget shouldn't be a problem, so I wouldn't worry too much about it. If you have a smaller website on Wix, or a smaller website anywhere else, crawl budget does not really affect you. If there are a lot of JavaScript files, all of them have to be fetched to render the page, and those fetches count towards your crawl budget. The other thing is that if the JavaScript then makes additional network requests to fetch content, such as API requests, or fetches additional images, all of those count against your crawl budget as well. But again, unless you have a really, really big site, that doesn't really impact you much.

Mustafa is asking about partial indexing of pages: how to quality-assure our URLs and find out which parts of the page Google is or isn't indexing. Partial indexing is a tricky one, because we're not really partially indexing things. We have an index where all the information from the page goes in. The confusion might be my fault, because I think at some point I said "partial indexing", which is not accurate. We index the entire page, and then we look at the content to find the bits that are more important than the other bits. So if you have a page about making an apple pie, and somewhere further down the page there's also a story about your cat, we might decide that this page is really good for apple pie, but the cat part is not as relevant and not as interesting. So the page might not rank for "cat", but it might still rank for "apple pie". But then sometimes people search for the cat part and think, oh, it's not in the index.
So they conclude the page was not indexed. It was indexed; the content just isn't ranking, because we don't think it's relevant enough to rank. Passage ranking (it was officially announced as "passage indexing", but it's really passage ranking) might change that a little bit: we can then say, actually, this part about the cat is also really good, and rank it for cat queries as well. But that's not something you can really check, and I don't think you have to worry about it. As long as your content shows up properly after rendering and the page is indexed, you'll be fine. You can then try out the different search terms and queries you're interested in and see what happens, and also use the performance report in Google Search Console to find out what you're showing up for. If that is not what you want to show up for, you have to see how to make the content more relevant for the terms you care about.

Lino, a lovely, fantastically clever product expert from the webmaster forum, is asking as well. He says: hello, I'm trying to capture JavaScript errors generated when Google loads and renders JavaScript. I use an event listener to load a pixel when an error is detected, and I add the error messages to the URL of the image. After that, I can hopefully analyze my server logs, extract the messages, and save them. Is there any other way to get error messages? I think there is no other easy way. You might be able to use something like Sentry, and I can't remember what they're all called, but Graylog might have a thing. There are different logging libraries that can send error information out to a separate service that shows you the information; that might be possible. But your way of doing this is not bad. It actually sounds like a good plan; a smart, interesting idea.

The next question comes from Marco. Marco is asking: hi, Martin.
There is a rumor that Googlebot does not like content in blocks that have "seo" in their class name, like a div with class="seo", and that such content is treated as less valuable than content in blocks without that class. The question is whether that's a myth, and the answer is: yes, that is a myth. Interesting one, though.

The next question comes from ict85. Is lazy loading on a news channel page, with a lot of constantly changing news articles, relevant to SEO? All content is currently communicated to Google with sitemaps and later via the Indexing API. If yes, what are the pitfalls and what are the best practices here? We actually do have a best practices guide for lazy loading. But generally speaking, if the news articles are independently submitted through sitemaps, RSS, or the Indexing API and we can index them, that's not a problem. We might just not rank that one specific page with lots of things on it, so I wouldn't say it's very relevant to SEO. If you want to make that page relevant in search results for some reason, then, again, at developers.google.com/search, under guides, there is a guide that explains a little bit about lazy loading of content. But you don't really have to worry too much about that, to be honest.

Miguel has the next question. We display different main navigations for desktop and mobile. I'm considering using JavaScript to remove hyperlinks from the DOM, for links that are only available on desktop, for mobile users. Is there a benefit or a risk? From an SEO perspective, I think it doesn't really matter that much. We will probably only index one of the two versions, depending on whether you're on mobile-first indexing or not. We might still crawl and potentially index the desktop version; we might just not highlight it as much in search results. The risk is that if you remove links that exist nowhere else, we might lose the semantic information on how to navigate between the different pages there.
But if the content is in the sitemap and there are lots of other places to reach it from, I wouldn't worry too much about this.

And now I shall reload the page to see if we have additional... oh yeah, there are even more comments coming in. Someone is asking for the Google Meet link. That's funny. I know this happens because of the way YouTube orders comments, unfortunately. So two more people want to join and can't see the link, it seems, but I'll happily help them, so we might see them join in a moment. But thank you, everyone who submitted questions; these were great questions from YouTube today. I'm really, really excited to see the quality of the questions constantly improve. I now have four people in the Hangout. If anyone has audience questions, now is a great time for them.

I'll get the ball rolling, break the ice.

Excellent, thanks, Dave. Also, by the way, a brilliant product expert from the webmaster forum. I'm a little nervous about your questions; let's see.

All right, yeah. It's a bit back to what you were saying earlier, that sometimes you render something more often than you'd crawl. How would that impact, say, an API call? Say I had a page, almost like that news example, and it was calling an API, pulling in the latest articles. Would there be a point where you're going back, fetching that API, and re-rendering from what you've already got, or would that only ever be triggered if you went back and crawled the main page?

That's a very, very good question. The way it works in the pipeline is that the crawler crawls the main resource, the HTML, fetches it, and caches it, and then the rendering uses the crawler to fetch all the sub-resources. Those also go into the cache.
That being said, if the rendering has to be restarted or is done at a later time, again for whatever reason, we go through the cache first. Especially for GET requests; you mentioned API requests specifically. If it's a GET request, so it can be cached, and it happens to be in the cache, and the cache happens not to have expired, we just take what we have in the cache. The general idea of the rendering process is to produce renders that are as consistent as possible: given the same data that we got from crawling at some point, we want the same output. That's why things like random number generation or time and date work slightly differently; you might actually see us operating with dates and timestamps from when we crawled rather than from when we actually render. So to keep this consistency, we use the cache.

But what if it's a non-cacheable request, or the cache has expired, or the cache for some reason doesn't have this request? In that case, the second render that runs would send the request to the crawler, the crawler would fetch the data, and it would go into the cache, or skip the cache if it can't be cached. So you would actually see the render updating without necessarily seeing a crawl. You can probably spot that in your server logs, because you would see the request coming in from the crawler for one specific resource. But again, we try to reduce the amount of additional or fresh fetches, because we want the renders to be as consistent as possible and, ideally, to update only when we also did a crawl run beforehand. But it can happen, especially with POST requests, which we can't cache; if it's an API request, say a GraphQL request using POST, it might actually fetch fresh.
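The cache behaviour just described, GET responses reused until they expire while POST requests always go out fresh, can be sketched roughly like this. This is purely illustrative, not actual Googlebot code:

```javascript
// Illustrative cache-first fetcher: GET responses are stored and reused
// until a TTL expires; POST (e.g. GraphQL) is never cached.
function createCachingFetcher(fetchFn, now = Date.now) {
  const cache = new Map(); // url -> { body, expires }

  return async function fetchResource(url, { method = 'GET', ttlMs = 60000 } = {}) {
    if (method === 'GET') {
      const hit = cache.get(url);
      if (hit && hit.expires > now()) return hit.body; // cache hit, no network
    }
    const body = await fetchFn(url, method); // fresh fetch via the crawler
    if (method === 'GET') cache.set(url, { body, expires: now() + ttlMs });
    return body;
  };
}
```

The design goal mirrored here is consistency: given the same cached inputs, repeated renders produce the same output, and only non-cacheable or expired requests trigger a fresh fetch.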
So it's not something where you'd think, oh well, I've learned this page only ever really updates via a POST request, so Google will just do that and fetch the GraphQL endpoint every time; a fresh fetch is more in the scope of "something went a bit wrong last time" or a retry in that batch.

Yeah, exactly. That's pretty much what normally happens. Okay, thank you. You're welcome, awesome.

Any other audience questions, now that we broke the ice? I don't want to do the teacher thing of calling on people.

Hi, this is Miguel Martinez. I know this probably isn't the most pressing question, but I had asked about considering using JavaScript to remove hyperlinks from the DOM, for links that only appear on our desktop version, for mobile users. There's some discussion within my organization about whether that's a good idea or a bad idea from an SEO standpoint, and whether Google would even run the JavaScript as part of the crawl.

So this is going to be interesting on the recording, because we answered this question earlier, but I'll very happily answer it again, because I'd like to hear whether the answer is sufficient for you, and that's always tricky to guess. And I apologize that you didn't see the link. (No, no, I'm good; I did not see the link, but all is well.) I know, and that's why I'm very happy to take this live; it's YouTube's formatting, or sorting, of the comments. They have this "top comments" ordering by default, and usually when you submit links they're not rated as top comments, so people don't see them unless they switch to the "newest first" ordering. It's a little weird. Anyway: we do execute the JavaScript, so we will very likely see whatever comes out of the JavaScript run. Whether you remove hyperlinks or add hyperlinks with JavaScript, we will see them, or not see them, accordingly. In general, it shouldn't really matter that much.
You can make that change if you feel it brings you benefits other than SEO benefits; if you say, well, it's better for the users, then that's a decision for you to make. There is no inherent benefit for SEO purposes. There is a potential risk for SEO purposes when the pages you removed the links to become orphan pages, and we can't actually reach them from anywhere else; then we might have a hard time fitting them back into the rest of the structure of the website, even if they are submitted in the sitemap. Sitemaps don't really give us hierarchical information, so that's the bit that's a little tricky. But I don't see navigation links as necessarily being much superior to a sitemap entry, because if a link is in the navigation, it's on every page, and that, again, is kind of non-hierarchical too.

Yeah, unfortunately it's a flat site architecture, very dependent on a legacy platform, and it's to the magnitude of about 1,300 hyperlinks' difference between the desktop and mobile versions.

Wow. So, if these pages are in the sitemap, and you would rather keep the mobile menu lean and remove those 1,300 links (and I understand why you want to do that; I'm pretty sure this has a performance impact on mobile devices), then I think this is fine. Just be aware that we might say: huh, okay, all of these pages no longer link to this other page, so we might not regard those pages as highly anymore. It does update the link graph, so just be aware of that, and if the SEOs on your team say this is fine, then that's okay; it's not a big deal. They might think differently, though; I would understand if they say, no, we care about the links being specifically here.

Well, I do appreciate that feedback; that's very helpful to us.
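Miguel's change might be implemented along these lines. The "desktop-only" class, the nav selector, and the 768px breakpoint are assumptions about the markup, not anything Google requires:

```javascript
// Pure helper: which nav links should remain visible on a given viewport.
// links: array of { href, desktopOnly } objects.
function visibleNavLinks(links, isMobile) {
  return links.filter((link) => !(isMobile && link.desktopOnly));
}

// Browser-only glue (assumed selector and breakpoint): on small screens,
// strip the desktop-only links out of the DOM entirely.
if (typeof document !== 'undefined') {
  if (window.matchMedia('(max-width: 768px)').matches) {
    document.querySelectorAll('nav a.desktop-only').forEach((a) => a.remove());
  }
}
```

Since Googlebot executes scripts like this, the removed links really are absent from the rendered mobile DOM, which is exactly the link-graph effect discussed above; the sitemap then has to carry the discovery of those pages.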
Good. It makes me very happy to hear that that was a useful answer, because with these questions I can answer in a shallow way or in a deep way, and I'm never sure which way to go unless I have the person in the audience. Fantastic; thanks for joining in and asking.

Awesome. So we had Dave, we had Miguel; who else has a question? A few people are just hanging out. All right, I'll check the social networks one more time and see if someone else said something.

Oh, a question that came through Twitter, not specifically for the JavaScript SEO office hours, but a good question nonetheless. Ryan was wondering whether there is a different user agent between what we actually use when we crawl, versus the live test, versus the Mobile-Friendly Test. No, they all use the same user agent. Depending a little bit on which route a request takes through the pipeline, the user agent might differ slightly, but it's not per tool; it's just the way the crawling infrastructure works.

They asked a follow-up question: then why does it look slightly different in one tool than in the other? For one, a tool might show you the desktop version: the URL Inspection tool, if your site doesn't have mobile-first indexing enabled yet, might show you the desktop version, whereas the Mobile-Friendly Test always shows you the mobile version, for the obvious reason that it's tailored towards mobile. The other thing is that, because of the way the infrastructure works, the real indexing pipeline makes heavy use of caching; but with a live test, as in the Mobile-Friendly Test or a live test in the URL Inspection tool, you don't want to see the cached version, you want to see what would happen if we fetched freshly.
That also means we might run into timeouts. That's not because of you; it's because of us: the way our infrastructure is built, it's not really meant to do live fetches, which is effectively what we're doing for the tools. So sometimes some resources time out, and then the outcome looks different. Even within the Mobile-Friendly Test or the URL Inspection tool, running the test three times might give you two or three different results, depending on how the requests go through. That's not something you need to worry about; it's just something to keep in mind. Sometimes you get false positives, as in things that would alarm you, like "oh, this part of the content is missing", when actually the resource just wasn't fetched this time. You can always look at what we have in the index using Search Console: if you use the URL Inspection tool and look at the crawled page, you see the rendered HTML from what we actually did when the infrastructure was running as intended, which is batch processing over a longer time. Then you can see whether the content is there or not. Just something to keep in mind.

All right, anyone else? John, yes.

I do have a question, but it's actually not JavaScript-related. (I'll try.) Okay, and I'll put my camera on to join you. We have internal links that need to have tracking parameters in them, more for political reasons than anything else at this point. Obviously it's an issue we've been running into: we're seeing the tracking parameters appear even in Google's index. Is there anything we can do on our end to help steer Google in the right direction and avoid those pages?

So, okay: are these pages that you definitely want in the index, just without the parameters, or do you not want them in the index whatsoever?

They're category pages, top-level pages. We want them in the index; we just don't want the parameters.
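One way to express the preferred, parameter-free URL is a canonical link element on the page; the URL and parameter below are placeholders:

```html
<!-- Served identically on /category/shoes and /category/shoes?src=topnav -->
<link rel="canonical" href="https://example.com/category/shoes">
```

The same clean URL would then also be the one listed in the sitemap, so both signals agree.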
You can use the URL parameters tool in Google Search Console; that's one thing you can do. The other thing is that you can canonicalize: you can give us a canonical saying that the version without the weird parameters is what you consider canonical. If that is a strong enough signal, we would pick it up and actually choose it as the canonical, and that's what we would show in the search results rather than the one with parameters. If you have the parameter-free URL in the sitemap and in the canonical, and we see that the parameter URL linked here has the same content as the non-parameter one, we would collapse them, and it's very likely we would canonicalize to the one without parameters. No guarantees, but that's the approach I would try.

Okay, that's what we have in place. But I think our issue has been that our top navigation links to pages with tracking, and everything shared on social and elsewhere always points to these tracking URLs.

Yeah, that's unfortunate; then that's a strong signal to use those as the canonical instead. All right, well, thank you. You're welcome.

All right. So Miguel, Dave, and John were brave and asked questions; anyone else? You can also use the chat feature if you don't feel comfortable asking your question by voice, or if you don't want to turn on your camera; that's fine. There's a chat in this lovely Hangout that you can use to ask questions as well. Miguel, another question?

I have a question. Oh, awesome, yes, please.

Hi everyone, my name's Ian. I know this is more centered on JavaScript, and I'm not sure that's the problem I'm facing with a particular website, but it very well could be. Let me give you some context. I'm working on a website that we would consider large: it has about 25 million pages.
The site has been on Google for quite some time, but it has never been able to get more than, I want to say, about 700,000 pages into the index. So the last thing we did was compile all the pages: we made sitemaps, we made a sitemap index, we submitted that, and then waited for a little bit. Eventually, in the coverage report, it showed up and basically said 24 million of those pages were excluded. After the coverage report was updated, which was a couple of days ago, I drilled down into it, and it says these are "Discovered, currently not indexed". So I wanted to see, first of all, if I could get any other information about what that status tends to mean. And then, following that, maybe talk a little about how this site is organized, because it does use JavaScript in a lot of different places, and maybe it's the use of JavaScript that has all these pages being excluded.

Right. First things first: it's not very likely that JavaScript is the reason, but I'll come to the unlikely JavaScript case in a moment. "Discovered, currently not indexed" means we have seen the URL, but we might not have gotten around to crawling it. That can have various reasons; it can be that we have never actually looked at the page yet because we didn't get around to it. Why haven't we crawled them? Well, there are plenty of reasons for that in itself. It could be that your server responds in a way that makes us believe we shouldn't request too many pages in one go. It might be that, based on the pages we do have in the index, we predict we shouldn't give the site as much priority, for whatever reason. It turns out the web is really, really big, so we have to start somewhere, and that means some pages get attention later. It might also be that we have crawled them, but that would usually show as "Crawled, currently not indexed".
So "Discovered" usually just means we're aware the page exists; we haven't visited it yet and haven't gotten the content. And that's exactly why I don't think JavaScript is your problem: if we have never crawled a page, we have never even looked at the HTML, so it doesn't really matter whether JavaScript is involved or not if we didn't get around to those pages yet.

Should it turn out that we have crawled them but not indexed them, then that might be a JavaScript issue in one specific case, which is that the JavaScript somehow does something that prevents us from seeing the content. That can be verified or falsified by putting the URL into the URL Inspection tool and running a live test, where you get the crawl and the render, and then seeing whether the content in the rendered HTML looks like what you would expect. If the rendered HTML has all the content you expect, then it's not JavaScript's fault, because JavaScript is only involved in the rendering phase; once we have the content, JavaScript doesn't really matter anymore. If you run into "Crawled, currently not indexed", it might also just mean we found the content not relevant enough to consider including in the index. We don't include everything in the index; not every page goes in. That's just the reality of it, because it turns out the cloud also has a limited amount of storage. In that case, you can only figure out which content you really care about and then work on those pages specifically to improve them.

Okay. Yeah, I've sampled pages from the 24 million and just put them in the inspection tool, and they came back fine, so you've probably ruled out the JavaScript issue. Some of these pages, yes, are thin. It's a website devoted to legal documents. So yeah, we have a bunch of pages that list all the companies that are on a case, for instance.
And then we have pages for dockets, and pages that are just big, long lists of PDFs. It's sort of like CourtListener, which is technically, I don't want to say a competitor, but somebody in the same space. So I guess we're just trying to figure out how CourtListener has millions of its pages already indexed, and why it's taking so long for us to get our stuff indexed. Like I said, the 24 million just popped up in coverage about a month ago, and it's slowly working through them: about 10,000 pages go up into the green every day. So I'm wondering if it's just, like you said, taking its time. And if that's the case, my only other question is: because this website has had pages indexed in Google for quite some time, is it possible that Google just decided this entire website isn't too important and assigned it a very small crawl budget? And if so, is there a way to tell Google: no, we're really taking this website seriously now, it's mobile-friendly, et cetera, et cetera; can you please give it more budget?

So, obviously we can't just give everyone the opportunity to say "I'm serious about this, give me more budget", because then everyone would do that, which is exactly the problem we had with the priority attribute in sitemaps. There used to be a priority attribute, and then everyone said: everything on my site, all 25 million pages, is first priority, which means none of them are. So that's not how it works. But what you can do in Search Console is have a look at the crawl statistics: if you go into Settings, then Crawl Stats, you see how much we crawled, and you should also see a trend line.
And if the page has been lingering for a while and we have seen, oh, there's not that much content here, so we don't have to assign it a large chunk of crawl budget, then you can try to update your sitemap XML file to say: hey, all of these pages have updated, would you like to come back and have a look? You can try to request indexing on the pages that you care most about. That's not really a thing that scales well, but it's a way to at least make sure that some of the more important items are getting in. And then over time we'll figure out, oh yeah, this is actually useful, good content, and the crawling would very likely also increase, given that there are no server issues that would suggest to us that we should probably not be crawling as much.

Again, the crawl stats report is your friend here, telling you whether there are issues with, I don't know, 500s or 504s or 502s or 400s or 401s or something like that. 404s are not really a problem; they just tell us that a link does not go where it should go, and that's not an issue per se, unless obviously it's a case where the URL should not be a 404 and you want it included. But yeah, you can have a look at what the Google crawler sees and what the Google crawler tries to accomplish, and then you can nudge it with sitemaps, and probably also nudge it by keeping your important content well linked within the pages, and request them for indexing to get them crawled again.

Okay, no, thank you, I appreciate that. Yeah, I'm looking at the crawl stats right now. It says it doesn't have any problems with responses; server status is green. It's interesting: when the 24 million appeared in the excluded report, there was a spike in traffic, total crawl requests went up to about 57,000 one day, 58,000, and then it just dropped off again. So I guess maybe Google is aware of it and is just processing it slowly.
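The sitemap nudge Martin describes comes down to refreshing the `<lastmod>` dates for the pages that actually changed. A minimal sketch, with placeholder URLs and dates, might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- An updated <lastmod> signals that the page changed and may be worth revisiting -->
  <url>
    <loc>https://example.com/dockets/12345</loc>
    <lastmod>2021-01-10</lastmod>
  </url>
  <url>
    <loc>https://example.com/cases/67890</loc>
    <lastmod>2021-01-12</lastmod>
  </url>
</urlset>
```

Note that, as the transcript points out, the `<priority>` element is no longer a useful signal, so there is little point in setting it; keeping `<lastmod>` honest is what invites a recrawl.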
Yeah, it takes time to process the entire web. Also, out of curiosity, as you are already in the crawl stats report: what's the average response time? If you go up, you see total crawl requests, download size, and average response time; that's the interesting one.

Yeah, it says 888 milliseconds.

Okay, so roughly 0.9 seconds. That's okay, I would say; that's not a bad metric. The quicker the better.

Good, okay. Yeah, thank you.

You're welcome, very happy to help. Ian, was it, right? Yes. Awesome, thank you very much, Ian. And actually, Ian, very nice of you to use the raise hand feature. I really like when people do that; I should encourage it more often. It's really cool because it makes things easier: people usually don't talk because they worry about talking over someone else, and the raise hand feature is a really nice way to counteract that. And I see that Kevin has raised his hand.

Hey Martin, thank you very much for setting up the Google Meet and doing these weekly. I know it's a lot of work just to get this stuff going. Anyways, to my question: I did post it in the forum. I'm a Google Street View photographer for a lot of people, and I've been doing it...

Uh-oh. Go ahead, what was that? Ah, okay. No, you were breaking up, but I know that you asked the question about the 360 photos, right?

Yes. So the question was, because I was rereading what I had read and it doesn't really clarify it either: it's about 360 photos on Street View, especially on Google My Business listings. I don't know if you're the exact person to ask, and I know they don't really have a Street View team, but I thought you might have some clairvoyance on what my issue is.
I've had people tell me that metadata on 360 photos is stripped once they go to Google Maps. But then, vice versa: if I'm taking a 360 photo in the middle of someone's showroom floor, especially around their tables or chairs, and I'm creating a virtual tour for their website, is it worth putting the item descriptions or item names inside the 360 photo for all of the actual items in it, obviously the ones that are easily seen?

I'm so sorry. Thanks for the very elaborate explanation and question, but I'm the wrong person, unfortunately. I really only know about Google Search; I have no idea about Google My Business, or Street View for that matter. To be honest, I have been taking 360 pictures just out of fun and curiosity, because I have a 360 degree camera, and for people I know who happen to have a shop, I sometimes wander by and take a picture for them so that they get a little more visibility in Google Maps. But that's a hobby thing, and I don't know who the right person to address this would be. I wonder if there is a support forum for Google My Business.

It was part of the Local Guides community, but unfortunately they dropped that, and they're saying the Street View program is getting completely reworked, at least from my understanding, right? Other than that, they really haven't clarified it. There are Facebook pages, but it's all community run; I don't think anyone from actual Google, or even the Maps team, is really on there.

So if you go to the "ask the community" thing, the forum kind of situation they have in the Google My Business Help: yes, it's a community-driven effort, but there are community managers from Google who look into these questions as well, and community members of the rank of Product Expert can escalate things to the community managers, and then it should get routed in the right direction.
The alternative is that you can also use the "send feedback about our Help Center" button in the Google My Business Help and ask a question there.

I have asked a few questions in the GMB Help Desk, because that's kind of what my business is. I usually do 360 photos for people, kind of correlating with the seasons, and I do the Google My Business management for them. Usually that means fixing people's map addresses, or related things like "okay, your map point is not over there, it's over here": a lot of what a Local Guide does, but doing it for a variety of businesses specifically.

Yeah, I'm very sorry that I'm the wrong person to ask, unfortunately.

No, that's all right. I didn't know if you knew anything about the virtual tours as well, for a website: if you're embedding the photos inside their website, would that change anything or even help with their SEO?

It doesn't really do anything on the search side, as far as I'm aware.

Okay. All right. Thank you. Sorry, I'm trying my best, Kevin, sorry. Thanks for asking. How dare you? No, you're good. Thank you. Awesome.

All right, Miguel raised his hand.

Hi, yes. My colleague was really intrigued by your response and wanted to know if you believe that removing links that are not accessible or visible to users is a good practice.

I mean, if they are not visible to users, yes. If they're not accessible to users, yes, I would also probably remove them. I wouldn't necessarily say it's a best practice, but I don't think we value these links as much as people think, because if users can't reasonably reach them, then I'm not so sure that Google would necessarily value them highly. That gets us into a really tricky world of what "accessible" means and how Google determines that, and I can't really answer that one.

That's fine.

But I think only having links that are meant for users to actually navigate is a good approach, yeah. Thank you.
And if I might be so bold, in relation to the JavaScript: links that are loaded only upon a user action, using a JavaScript call, do those count as part of the link quota on the page if they only load after the JavaScript action?

There's no link quota.

Not a link quota, but as part of the crawl.

Yeah, no, because if they only exist after a user interaction, we can't really see them, right? We are not interacting with the page. So if you click on something and then JavaScript generates a bunch of links that haven't been there before, Googlebot would not really see them, unless they happen to already be in the source code and are just hidden and only made visible later, because then we would see them in the HTML and then we would very likely also follow them.

So like aria-hidden versus aria-hidden="false"?

Yeah, that wouldn't help, because then they're still in the source code, and we extract the links before we even render the first time, and then we extract them again after we render. So we would probably find them before we render, because they are in the DOM.

Great, thank you.

You're welcome. We might choose not to follow them, but we will see them at least.

Okay, excellent.

Anyone else? We have seven more minutes before I need to wrap this up. Feel free to use the chat or the raise hand feature. I see that there's lots of love for the GMB office hours idea; I'll forward that to the team, that's all I can do. Actually, I'm not sure, but Alan Kent from our team works a lot with the Google My Business folks. I think it's A.J. Kent 99 or something on Twitter; if I remember correctly, let me see if I can find him on Twitter. A.J. Kent 99, I think it is, but I might be wrong. And then, oh yeah, no, that's not him. Oh God, okay. So I nearly threw someone under the bus who has nothing to do with this, an "A. Kent 99". I'll post it in the chat as well. That gentleman is from my team, and he works a bit with the GMB team as well.
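The two cases Martin distinguishes, a link that is in the initial DOM but hidden versus a link that only comes into existence after a click, might look like this in markup. The URLs and ids here are made up for illustration:

```html
<!-- Case 1: the link is in the server-delivered HTML but hidden.
     It is in the DOM, so link extraction can find it before and after rendering,
     even though a user never sees it. -->
<a href="/archive" style="display: none" aria-hidden="true">Archive</a>

<!-- Case 2: the link only exists after a user clicks.
     Googlebot does not interact with the page, so this link is never
     present in the DOM it sees. -->
<button id="more">Show more</button>
<script>
  document.getElementById("more").addEventListener("click", () => {
    const a = document.createElement("a");
    a.href = "/archive";
    a.textContent = "Archive";
    document.body.appendChild(a);
  });
</script>
```

The practical takeaway from the transcript: toggling `aria-hidden` or CSS visibility does not hide a link from crawling, because the `<a href>` is already in the source; only links that never exist until an interaction are invisible to Googlebot.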
So you can voice the desire for GMB office hours in his direction, or ask him, because he does the e-commerce SEO office hours every now and then, or he used to; I'm not sure if he still does. So you can pop by his Twitter account and ask him if he still does the e-commerce office hours and what he thinks about GMB office hours specifically. That's something I can at least do.

Thank you. I will definitely follow up with Alan as well.

You're welcome. Happy to help.

All right, as no one else is raising their hand or using the chat to post questions, I would say thank you. So no, Anastasia, it is not. You can use the office hours here right now to ask a question, even using chat; that is fine. You can use the webmaster forum, or you can use public Twitter, not direct messages. I cannot provide private support; it has to go over the public channels, for the obvious reason that we don't want to give preferential support to any one party. So unless Anastasia does have a question for chat, I'll happily wait for that to arrive. If that's not the case, then I would say thank you very, very much for joining the office hours. It has been a great pleasure answering all the questions from YouTube and here in person. I hope that you stay safe and healthy, have a great time, and I hope to see you again in, I think, two weeks. All right, bye-bye. Always the awkward fumbling for stopping the recording. Here we go.