All right. Hi. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. Today, we're live from New York with a bunch of guests — awesome SEO guests from nearby — and Martin from Zurich, where it's a bit of a special time, so perhaps we don't have as many people live. That's fine. Maybe you can introduce yourselves? Sure, hey. My name is Cuisin. I run all performance and organic marketing at an agency called FIG near Soho in New York. Dave Binchala, SEO person focusing on large e-commerce right now. My name's Marie Haynes, and I have a small agency where we do mostly site reviews. And this is such an honor to be here. I was telling John, I've watched these hangouts and taken notes — I have over 500 pages of notes from these hangouts. So thank you for having me. I'm Lily Ray, director of SEO at Path Interactive in New York City. Chris Love, I'm primarily a front-end web developer, but I've done a lot more SEO over the last year, so I'm happy to be here as well. Cool. OK. So we have a bunch of questions that were submitted, but maybe we can get started with some from you all. So you want to do the honors? Please. OK. OK, so in this forum and in others as well, whenever someone has a question about — I read this piece of advice or this guidance in some Google documentation, or in some hangout question that I've asked — often the response is: test it. That could mean go to Search Console and see how it renders. Sometimes it also means just try it in a small area of your site and see how it goes. The complexity in that second scenario is a real friction point for some folks and some sites, because they might be dealing with small traffic samples, or templates where it's either on or off. They don't have the right testing infrastructure, or perhaps they don't have the right testing competence in-house to be able to tell the business: when we did this thing, it definitely had this impact. The other things to contend with, that are externalities, are that some of these queries might be seasonal, so things might change just from search demand. Maybe you all made a small change to the algo that affected some query or queries that are important to the business, and it's hard to disentangle whether you folks did something or the test did something. So I was wondering if you had any more guidance, or any more of a point of view, on testing in a way that SEOs can use to be more precise and more confident reporting to the business: we tried this thing, and it made this impact, and we did it in a way that's sure-fire — sort of the way that Google would recommend. Oh my gosh, starting off easy, OK. So from my point of view, I try to separate out the more technical type things from the quality type changes. Anything that is really a clear technical thing, you can pretty much test: does it work or does it not. It's not a matter of it kind of works or it kind of doesn't work. A lot of these technical things, especially when you're looking at rendering, or at whether Google can actually index this content, that's something that either works or doesn't. Where it gets tricky is everything where it's indexed, but how does it show up in rankings. And I think for a lot of that, there's no absolute way to really test it. Because if you test it in an isolated situation — like you make a test site and you set it up with whatever recommendations you have — you can't really assume that a test site will perform the same way that a normal website will.
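For readers who want to act on the kind of testing discussed here: one common pattern for SEO split tests on templated sites is to assign comparable pages to a control group and a variant group, change only the variant, and compare each group's movement against its own pre-test baseline, so that seasonality and algorithm changes, which hit both groups, mostly cancel out. A rough sketch, in which the page lists and click counts are entirely hypothetical (in practice they might come from a Search Console performance export):

```python
# Rough sketch of an SEO split test across similar templated pages.
# All URLs and metrics here are hypothetical placeholders.
import random

pages = [f"/category/{i}" for i in range(100)]  # comparable templated pages
random.seed(42)                                 # reproducible assignment
random.shuffle(pages)
control, variant = pages[:50], pages[50:]       # only the variant gets the change

def avg_relative_change(before: dict, after: dict, group: list) -> float:
    """Average per-page relative click change for one group of pages."""
    deltas = [(after[p] - before[p]) / max(before[p], 1) for p in group]
    return sum(deltas) / len(deltas)

# before/after would be {url: clicks} dicts for matching date ranges.
# The estimated effect of the change is the difference between groups:
# effect = avg_relative_change(before, after, variant) \
#        - avg_relative_change(before, after, control)
```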
There are sometimes simple things, like if it's a test site, then maybe we won't do full rendering, because we think it doesn't make sense to spend so much time on this test site: nobody looks at it and it never shows up in the search results, so why should we put so many resources behind it? Whereas if you did that on a normal website, that would work very differently. So that's something where I don't really have any clear guidance on what you should or shouldn't do. It seems more like the kind of thing where you look at the general trends, where you see that the website is showing up, kind of the changes in rankings that you're seeing for those queries, and try to apply the best practices there. So maybe a concrete example would be helpful. Something like a title tag test, right? If you were doing that, what, if anything, should we look for? Or is there anything to look at to disentangle: this is due to our test, versus this is due to something changing about the SERP, the algo, the competitor dynamics — assuming that we're also looking at the other things you'd look at nowadays. I think a title tag change is actually pretty complex on our side, in that there's kind of the interplay of: does Google use the title tag that you're actually providing — on the one hand for ranking, on the other hand for showing it in the search results. For desktop and mobile, we have different amounts of room, so we might show slightly different title tags. Users might respond to that in different ways. So you could be ranking the same way, but users might think, oh, this is a great page, I will click on it in the search results because it looks like a great page. And then you have more traffic, but the ranking is actually the same. So is that a good thing? Probably, I guess. If you're just looking at the ranking, then that would look like, well, you didn't change anything, right? We just got more traffic. I think, let's look at the amount of unique queries being sent to that page. Yeah, yeah. But that's something where there are lots of different aspects that flow in. So as an SEO, it's useful on the one hand to have the technical understanding of what things happen, but also to have that more marketing, quality understanding of how users react to change, what kind of long-term changes we could effect with users, how we can drive that to make sure that our site is seen as the most relevant site in situations where people are searching for something related to it. And that's something that's really hard to test. Even in traditional marketing, where they've had years and years of practice, it's really hard to test: is this actually having a big impact or not? It's something where they end up looking at the bigger picture, or at how users use their sites, which you can do as an SEO as well. Thank you. Sorry, I'm going to ask a direct question. So a number of our sites are utilizing Cloudflare, and we notice that they directly block Googlebot. But it hasn't had much impact on our rankings, on our visibility, et cetera. So I'm kind of trying to figure out: are you guys using another bot to crawl and index outside of Googlebot directly? And how should we be thinking about that when CDNs are going out of their way to block the bot?
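Some background that may help with the question above: Google documents a reverse-DNS plus forward-DNS check for verifying Googlebot, and that is typically how a CDN can tell a real Googlebot from a tool spoofing its user agent — which comes up in the answer that follows. A minimal sketch of that check (the IP shown is just an example):

```python
# Minimal sketch of Google's documented Googlebot verification:
# reverse-resolve the IP, check the domain, then forward-confirm it.
import socket

def is_real_googlebot(ip_address: str) -> bool:
    try:
        # 1. Reverse DNS lookup of the requesting IP.
        hostname, _, _ = socket.gethostbyaddr(ip_address)
        # 2. The name must be under Google's crawler domains.
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        # 3. Forward DNS must map the hostname back to the same IP.
        return ip_address in socket.gethostbyname_ex(hostname)[2]
    except OSError:  # covers herror/gaierror lookup failures
        return False

print(is_real_googlebot("66.249.66.1"))  # example IP from a published Googlebot range
```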
I don't know how that's set up with Cloudflare at the moment, but I know in the past they used to block people who were faking Googlebot. So if you used, I don't know, your own Screaming Frog or something, and said, I'm using the Googlebot user agent, then they would block that, because they could tell: this is not a legitimate Googlebot, we can block that. But for the most part, I think they have enough practice to recognize normal Googlebots and to let those crawl through. Yeah, it was interesting, because I reached out to a bunch of colleagues at other agencies, and they were replicating the same thing, even on their own sites — there was even a support ticket within Cloudflare. And that was also being blocked when I was just trying to render it directly as Googlebot or Googlebot smartphone. Okay, yeah, well, we don't have any workaround. If websites are blocking us, we're kind of stuck. But usually, if a service like Cloudflare were blocking us by default, then that would affect a lot of websites, and we would notice that — we would probably reach out to Cloudflare about something like that. It might be that they have different service tiers, where if you're at the lower tier, it's like a free service but with a limit on the amount of traffic. I don't know if they have something like that. That's something I've seen with some other hosts, where if you have kind of the free default hosting setup, then sometimes they just have a traffic cap. You might not immediately see that in the stats on how you're ranking and stuff, because if we already have content from you — and it depends on what "blocked" means in this case, because I haven't seen that, and I run a few websites behind Cloudflare and have not had any problems; then again, I don't have a gigantic website or huge amounts of traffic, because I'm on the free plan as well — but if we don't get an error on our side, it might just be that we keep the content that we've seen the last time, and that just ranks, and that's okay. Yeah, that's what I assumed. But I kept replicating it with some of these, and I was like, okay, this can't be me. It's something that I'm definitely gonna look into. So John, what does the webmaster do when, for instance, he goes back to his Search Console dashboard and he keeps on fetching and rendering — what happens there? He just makes sure, okay, you know, there's no blocking, blah, blah, blah. Is there a limit? Can you get penalized if you keep on fetching and rendering — if this is happening, like if you got blocked, like if you think that Cloudflare blocked you or anything? You don't get a penalty for just using the tools. There are some limits with the tools, but it's basically just that you can't keep using them all the time. It's not that your website would be harmed. Okay, because the person who called me was like, I keep on fetching and rendering, and now this captcha thing is there, and I can't get out of this captcha. And I said, you're spamming Google, stop spamming Google — I was just being facetious with him, and that's it, you know. But so there's no limit. Just go. It doesn't affect your website. So I think what would happen in a case like yours is we would slow down our crawling, we would try to keep the content that we can fetch indexed, and we would just retry longer.
But that also means, if you make changes on your website, it will take longer for us to pick that up. If we have to recrawl things significantly — like you add AMP, or you add some structured data across the whole site — that's all gonna take a while. So if you really regularly see that we can't crawl with the normal Googlebot, then that's something I'd take up with the host or the CDN, yeah. I was just wondering, when Google sends an email, how come it doesn't say this is from Google? It's like an empty subject line. I don't know. Is it Search Console? Yeah — if, for instance, you have an issue and you corrected the issue, and then you go ahead and submit it, like you're waiting for the process to happen, and then you get an email back saying the AMP issue is in progress — Google will email you back. So when Google emails back, first of all, it's an empty subject line. There's nothing in the subject line or the from, and then you open it up, and it says, okay, successfully solved the issue on AMP. But yeah, the emails from Google — not just to me, but in general — don't say they're from Google. Okay, we should be able to fix that. Okay, that's a good point. Yeah. Thank you, Baruch, for coming onto this hangout. Man, more work. Sorry, sorry. All right. Hi, John. Let me just take one from Marie. Okay, great. So I have a question about the disavow tool. We get people all the time who want us to do link audits, and ever since Penguin 4.0 — so September of 2016 — where Gary Illyes has said, and I think you said as well, that Google's pretty good at ignoring unnatural links, my thought has been, well, we shouldn't have to use the disavow tool to ask Google to ignore links that they're already ignoring, unless you had a manual action for unnatural links. So we've been only recommending it for sites that have been actively building links, trying to manipulate rankings, things that are unnatural links. But I think there's so much confusion amongst webmasters, because I see people all the time charging tons of money to disavow links that are just — I know they're being ignored. So I would love if we could have just a little bit more clarification. Maybe I can give you an example: if there was a business owner who a few years ago hired an SEO company, and that SEO company did a bunch of guest posting just for links, and stuff that was kind of medium quality, if you know what I mean — not ultra spammy — can we be confident that Google's ignoring those links, or should we be going in and disavowing? I think that's a good question. So from my point of view, what I would look at there is, on the one hand, definitely the cases where there is a manual action, but also the cases where you, as someone who's seen a lot of manual actions, would say, well, if the web spam team looked at this now, they would give you a penalty. Kind of the cases where you'd say, well, a manual action is more a matter of time — not the cases where it's based on something that was clearly done a couple of years ago and was kind of borderline, not awesome. That's the kind of stuff I'd say is no problem, that we would be able to ignore anyway. But the kind of stuff where you look at it and say, if someone from the web spam team got this as a tip, they would take a manual action — that's definitely the kind of thing where I would clean that up and do a disavow.
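For reference, the file uploaded through the disavow tool is just plain text: one URL or domain: entry per line, with # for comments. A minimal sketch of generating one — all domains and URLs here are made up:

```python
# Minimal sketch: writing a disavow file in the plain-text format the
# disavow tool accepts. All domains/URLs here are hypothetical examples.
bad_domains = ["spammy-guest-posts.example", "paid-links.example"]
bad_urls = ["http://borderline.example/old-guest-post.html"]

lines = ["# Links built by a previous SEO agency, a few years ago"]
lines += [f"domain:{d}" for d in bad_domains]  # disavow whole domains
lines += bad_urls                              # or individual URLs

with open("disavow.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```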
So something that was done a few years ago, you would probably — I know, I know you can't talk about additional specifics there. I think it's hard to say if there's a specific timeline, but in general, if the web spam team looked at this and said, like, things have moved on, this was clearly done a couple of years ago, it was not totally malicious, then they probably wouldn't take a manual action. And I'm assuming you probably can't answer this, but is there any way that — so say we didn't get a manual action, or they didn't get a manual action — can those links hurt them algorithmically? Because we feel like we're seeing some improvements on some sites, you know, after disavowing. And again, I know it's never black and white. That can definitely be the case. It's something where our algorithms, when they look at it and they see, oh, there are a bunch of really bad links here, then maybe they'll be a bit more cautious with regard to links for the website in general. So if you clean that up, then the algorithms look at it and say, oh, it's okay. It's not bad. It's still good to basically disavow just to prevent a manual action, correct? I think if you're in a case where it's really clear that the web spam team would give you a manual action based on the current situation, then that's where I would disavow. So it's good to think like someone on the Google web spam team — just think, you know, if they looked at this, what would they do? Okay. No, the problem is that most people don't know. I mean, the average business owner doesn't know which links the web spam team would act on — and I mean, there are guidelines, but it's hard to interpret those. So I have a couple of concerns, but my main concern is there are people spending tons of money on link audits that I think are not worth it. On the other hand, we may be not doing link audits and disavows for some sites that could benefit from them. So I'd love to — you know, I think what you've said has helped a lot, so that's good. Now, I think for the vast majority of sites that have that normal mix of things — where it's like, you followed some bad advice in the past, and you moved on, and things are pretty natural now — then you really don't need to do anything. That's kind of the goal with all of this. And that's why the disavow tool isn't like a main feature in Search Console; you kind of have to explicitly look for it. That's all done on purpose, because for most sites, you really don't need to focus on links that much. What I do kind of like about the disavow tool, though, is that if you're worried about this, you can still go there and be like, okay, I know there are these handful of things that we did a couple of years ago, and I'm really worried about that — then disavowing them, from my point of view, is not a problem. I wouldn't go out and explicitly search for all of that stuff, but if you know about it and you're really worried about it, then you can take care of it. Awesome, thank you. Cool. Hi, John. Hi. I have a question about one of our client websites. They have clubs in various cities in New South Wales, and each club has a subdomain on their website. Now, when they add any page to their website, they create the page for each subdomain.
So recently they added a page about their club activities, and they added this page to all their subdomains. So all of the subdomains have the same content, except the title of the page is different: when they add it for Sydney, they put the location name in the title tag, and when they add it for Newcastle, they put Newcastle in the title tag, but the rest of the content on the page is the same. So will it be a problem that they have 50 subdomains and have created 50 pages which have the same content except the title? That sounds like something that's a bit inefficient, I guess. I mean, you're already bringing it up and kind of saying this seems like something that could be done differently. I think if you're in a case where you have 50 subdomains that all have the same content, and you're just changing the title tag, then you're probably not providing us a lot of really useful content. So that's a situation where I would say it makes more sense to combine things and make really strong pages, rather than to dilute things across even more subdomains. A good example of doing that first part well would be Wikipedia — just see how amazingly well they do. Yeah, I think copying Wikipedia is always hard, too, because they're kind of a special situation. No, I'm not saying copy them, it's just a reference point — there's a reason why a lot of their pages rank number one, right? What about creating one page and then using a canonical URL on the other location pages? How do you mean? I mean, I just want to create one page which will talk about their activities, and I will use that link as the canonical URL on the other location pages. Is that right? For the other location pages, yeah. I think that could make sense, because then you're combining things again. Then you're making one strong page rather than a bunch of diluted pages. Because what happens is, when someone visits the website and chooses their location, they are automatically redirected to the subdomain that was created for their location. So they need the page on that subdomain. So I think that is why, if we create the one page and add the canonical URL, that is the only option we have at this moment. You can also use the Google Maps API as well. I think you've got to pay like $20 or something, and then you can use that to add Google Maps in there, like a location. Oh, like a Maps listing. Yeah, I think you guys started charging — I think it's like $20 or something right now. I have no idea about Maps. The Google Maps API, yeah. Okay, but yeah, I think if you have separate pages that you need to have for kind of technical reasons on the site, and you use the canonical, that's a good approach. Okay, thank you. Like a business that has multiple franchises in different locations, that would essentially have the same content for each franchise but be in a different city or township or whatever — and kind of funnel that, from your point of view, back to a single page? I think that's always a bit tricky, because you're balancing people looking for that kind of business in a specific location with kind of an informational page around that business directly.
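To make the canonical approach discussed above concrete: each location subdomain's duplicate activities page would carry a rel=canonical pointing at the one central version, so the signals get combined onto a single strong page. A minimal sketch with hypothetical hostnames:

```python
# Minimal sketch: emitting the rel=canonical tag each near-duplicate
# location page would carry, pointing at one central activities page.
# All hostnames here are hypothetical.
CENTRAL_URL = "https://www.example-clubs.com/club-activities"

for location in ["sydney", "newcastle"]:
    page_url = f"https://{location}.example-clubs.com/club-activities"
    tag = f'<link rel="canonical" href="{CENTRAL_URL}">'
    print(f"{page_url}\n  -> include in <head>: {tag}")
```

Note that Google treats rel=canonical as a hint rather than a directive, so it works best when the pages really are near-duplicates, as they are in this example.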
So that's something where sometimes it makes sense to have that separate per business, and sometimes it makes sense to have the general information in a central place and to have just location landing pages that are more focused on the address, opening hours, those kinds of things. Thank you. Sure. Should we maybe take some more questions from here? Lily, I think you had a bunch. I have a related question to the canonical point that you were making. This is a question that my team and I have had for a number of years, and we still don't exactly know the solution. If you're handling a big e-commerce site with many, many products in many, many categories — let's say you're on a category page that has a lot of different filters and facets and things of that nature that slightly change the page content, but maybe not enough to justify having its own URL, though maybe in some cases with certain filters it does justify having its own URL — how do you handle crawling in that situation? Does a canonical tag work as a blanket solution to just create one page that's indexed? Or should you look at noindexing certain facets and filters, or using robots.txt? How do you kind of control that for large e-commerce sites? That's tricky. I don't think we have really clear guidance on that at the moment. So that's something where all of those different methods can make sense. In general, I try to avoid using robots.txt for that, because what can happen is we find the URLs, but we just don't know what is there. So unless you're really seeing problems where crawling is causing too much load on the server, I'd try to use things like the noindex, use the rel canonical, maybe use rel nofollow on internal links, to kind of funnel things and make it a little bit clearer what we should crawl and index, rather than using robots.txt. But the decision on when to combine things into an indexable page, when to block it from indexing, and when to softly guide it towards one canonical URL — that's really tricky sometimes. Sometimes the canonicals get ignored if the page content is too different. Exactly. If the content is different, then we might say, well, these are different pages, we shouldn't use the canonical. Whereas you might say, well, this is really something I don't want to have indexed — then maybe a noindex would make more sense than a canonical. You can also combine both of them. We don't recommend combining them, because it's really tricky for us to say, well, what do you mean? Are you saying these are the same? But if one is indexable and one is not indexable, then they're not the same. But it's something where I imagine over the course of the year we'll have some clearer guidance on what could work there. Especially with really large e-commerce sites, the crawling side can be quite challenging. Yeah, I don't know. So there's a scenario that I'm trying to figure out with a couple of my customers lately. We're trying to figure out why we're not able to jump ahead of a site that's still using HTTP and more or less looks abandoned, because the pages haven't been updated in a while and the content's old, outdated, and generally kind of thin. And so I've got a couple of theories about it. Part of it is, I think maybe it's been in the index so long that y'all kind of have a trust factor built up with them, and it's kind of hard to unseat them. That's part of my theory on that. So I'm just trying to figure out what's going on, because I know HTTPS is a factor.
I don't know how much of a factor it can be, but I also think the age might be part of the problem of trying to provide that newer, fresher content — which in most cases is what we've done over the last years, a lot more thorough than what was written, say, 10 or 12 years ago. So we're trying to figure out why it's taking so long to essentially move ahead of those pages in a lot of cases. So HTTPS is a ranking factor for us, but it's really kind of a soft ranking factor. It's really small. One of the things I've noticed when I encounter sites that are still using HTTP is that effectively they haven't been updated in two or three years, usually. So to me that's kind of like they've almost been abandoned, so I'm looking at it as a signal of freshness and stuff like that. Yeah, I mean, freshness is always an interesting one, because it's something that we don't always use — sometimes it makes sense to show people content that has been established. If they're looking at kind of long-term research, then some of this stuff just hasn't changed for 10 years. I can give you a pragmatic example, since I'm a web developer. I see pages that were written, say, in 2006 or 2007 that haven't actually been changed, but the web standards, the web specifications, or just the general way of handling those things have evolved — that page is still written as if it's 2006. And, you know, I've got something that's fresher, that's more in depth, things like that. Like I'm number 11 and there's the number four, for example. I'm like, why are they still up there, you know? Yeah, it's hard to say without looking at the specific cases. Yeah, exactly. But it can really be the case that sometimes we just have content that looks to us like it's really relevant. And sometimes this content is relevant for a longer time. I think it's tricky when things have actually moved on and these pages have just built up so much kind of trust and links and all of the other kinds of signals over the years, where it's like, well, it seems like a good reference page, but we don't realize that actually other pages have moved on and become more relevant. So I think long-term we would probably pick that up, but it might take a while. Okay. I don't feel like it has to do with a long-term trust of that particular page — I don't know if we'd call it trust or anything crazy like that. It feels more like we just have so many signals associated with these pages, and it's not that if they were to change they would disappear from rankings. It's more, well, they've been around, they're not doing things clearly wrong, and they've been around for a long time, and people are maybe still referring to them, still linking to them. Maybe they're kind of misled and linking to them, because they don't realize that actually the web has moved on — or maybe, I don't know, a new PHP version came out and the old content isn't as relevant anymore, but everyone is still linking to, I don't know, version three or whatever. This is W3Schools as well. Yeah, you know what I'm talking about with that. I've also seen that in the health and fitness space as well — like, workout types were more popular 10 years ago, but the particular approach to them isn't necessarily as popular now, or has kind of been proven to not necessarily be as good. Just some other general observations I've made, too.
I think it's always tricky, because we do try to find the balance between showing evergreen content that's been around and is seen more as reference content, and the fresher content — and especially when we can tell that people are looking for the fresher content, we'll try to shift that as well. So it's not something that will always be the same. Are you ever going to rewrite the Amit Singhal 200 questions? Is that going to be rewritten? 200 questions? Oh my gosh. Or whatever it was — take a look at this webpage and really think closely. Are you ever going to rewrite it for 2019? I don't know. It seems like a good idea. Yeah. There are a lot of things in there. It's something that we refer to a lot with regards to quality. So maybe that's something that we could do at some point, but a lot of these things don't really change. They're pretty relevant still. We use them in our reports. They're a good indication of whether your website is helping people, and of high quality, so I wouldn't rewrite them. Okay, that's good. That's good. See, we shouldn't update stuff. All right, let me double-check what kind of questions we got submitted. We have a large e-commerce site that's not in the mobile-first index yet. We know we serve different HTML for the same URL depending on the user agent. Could this harm us? — I think, essentially. So you don't get a ranking bonus for being in the mobile-first index, so it's not that you need to be in there. It's more a matter of, when we can tell that a site is ready for the mobile-first index, we'll try to shift it over. And at the moment, we're not at the stage where we'd say we're flagging sites with problems and telling them to fix things; it's more that we're just trying to get up to the current status and say, okay, we've moved over all of the sites that we think are ready for mobile-first indexing, and as a next step, we'll try to figure out the problems that people are still having and let them know about these issues so that they can resolve them for mobile-first indexing. So it's not that there's any kind of mobile-first indexing bonus out there. It's more that we're step-by-step trying to figure out what the actual good criteria should be. Given that the search quality guidelines are an indication of where Google wants its algorithm to go, how does the current algorithm handle measuring the expertise and credibility of publishers? I don't know. Yeah, come on. I think that's probably hard to figure out algorithmically directly. And if there were any kind of technical things that you should do, we would let you know. So if there were things like the authorship markup that we had at some point, that we think would be useful for something like this, we would definitely bring that out there. But a lot of these things are really more kind of soft quality factors that we try to figure out. And it's not something technical that you're either doing or not doing. It's more like we're trying to figure out how a user might look at this. So, not anything specific that I could point at. Is it reasonable to assume that if something's in the quality raters guidelines — I mean, that's what Ben Gomes said, right? That's where Google wants the algorithm to go. So, I mean, we may be guilty of putting too much emphasis on the quality raters guidelines, but it's all good stuff in there, right? So is it reasonable to make that assumption — like, if it's in there, we should aim for that sort of standard of quality?
I think in general, it's probably good practice to aim for that. I'd avoid trying to focus too much on what Google might use as an algorithmic factor, and look at it more as: we think this is good for the web, and therefore we will try to go in that direction and do these kinds of things. So not so much, I'm making a good website just so that I can rank better, but, I'm making a good website because when I do show up in search, I want people to have a good experience — and then they'll come back to my website and maybe they'll buy something. So that's kind of the direction I would see that. Not as, do this in order to rank, but, do this in order to have a healthy long-term relationship on the web. Next one, let's see. A comment from Martin. Is there a particular type of schema that is more likely to obtain featured snippets or voice search results? I don't know. I can't think of anything offhand. So there is the speakable markup that you can use, which is probably reasonable to look into to see where it could make sense on a page. I don't think we use that in all locations yet. So I don't know. Is that the goal — to use it in more locations? That would make sense, I guess. I mean, it's always a bit tricky, because sometimes we try these things out in one location and refine them over time. And usually that means we roll it out in the US, where we can process the feedback fairly quickly. We can look to see how it works, how sites start implementing it or not. And based on that, we can refine things and say, okay, we're doing this in other countries and other languages, and take it from there. But it's not always the case that that happens. Sometimes we keep it in the US for a couple of years and then we just say, oh, actually, this didn't pan out the way that we wanted. So we'll try something new. We'll give it up. Yeah. But a lot of these structured data types, we do try to roll out in other countries and other languages. I imagine the speakable markup is tricky with regards to the language. So that's something more where we'd say, well, Google Assistant isn't available in these languages, so why do we care what markup is actually used? I don't know how many places Assistant is available in yet. Maybe that's everywhere now. But featured snippets in particular — I don't think we have any type of markup specific to that. So that's something where, if you have a clear structure on the page, that helps us a lot. If we can recognize, like, tables on a page, then we can pull that out a lot easier. Whereas if you use fancy HTML and CSS to make it look like a table, but it's not actually a table, then that's a lot harder for us to pull out. John, do internal links help with featured snippets? Could we have an anchor — sorry, not an internal link, an anchor link. Do you think that would help? I do know we sometimes show those anchor links in search, kind of as a sub-sitelink type thing. But I don't know if that would work for featured snippets. Do cross-domain sitemap submissions still work when 301 redirecting to an external sitemap file URL? Hopefully. What about using a meta-refresh? This was something that was recommended by a video hosting company. People said: we'll host the sitemap on our site, but your XML file will meta-refresh over to our site, where all the links are located. I don't think that would work. Sitemap files are XML files, and we process those kind of directly.
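Since a sitemap is parsed as plain XML, client-side tricks like a meta-refresh have no meaning there; the options that can work are a server-side 301, or explicitly referencing the cross-host sitemap — for example from robots.txt, which comes up in the answer that continues below. A minimal sketch of the robots.txt approach, with hypothetical hostnames:

```python
# Minimal sketch: referencing a sitemap hosted on a different domain via
# a Sitemap: line in robots.txt, which the sitemaps protocol uses to
# authorize cross-host submissions. Hostnames are hypothetical.
robots_txt = """\
User-agent: *
Allow: /

# Full URL of the sitemap hosted by the video provider:
Sitemap: https://video-host.example/sitemaps/our-site.xml
"""

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write(robots_txt)
```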
So if you do something that's more like a JavaScript redirect, or that uses JavaScript to get us the sitemap content, then that wouldn't work. It would really need to be a server-side redirect. What you can also do is use a robots.txt file to specify a sitemap file on a different host — that also confirms to us that you told us specifically to use a sitemap file from there. So I'd probably use something like that more than any kind of redirect. I imagine a 301 server-side redirect would work. But, you know, you should be able to see some of that in Search Console too. Like, if we're picking the sitemap up, and in the index coverage report you can pick the sitemap file, then that's a pretty clear sign that we can process them. Okay — can you share instructions on how I can get pre-notified of hangout sessions? Okay, I post them on Google Plus, where we have millions and millions of followers. So you should join us on Google Plus. But we'll need to figure something out when Google Plus goes away. I don't know what the best approach is here. We also put them on the Google Webmasters page with, I think it's a shared calendar. So you can even subscribe to that calendar. But that lists all kinds of events, so that could be conferences or hangouts in different languages as well. What's the 200 questions document that was referenced? I think it's more like 20 questions. 23 questions. I don't know, when was that — like 2011, 2012? I think, yeah, it was '12 or '13. It came out after Panda. Okay. But you can just search for 23 questions Amit Singhal. I just wanted to make things interesting for you. Okay. It's not always the first result for me. It's funny — there's usually a Search Engine Land article that's first. And then the second one is, yeah. Oh, who's that? Barry's sending all these links, you know. Oh my gosh. Yeah. Well, he's really fast. Okay. Maybe we can take a question from you all, if anyone else has any questions here. Okay. John, can I bring in my question? Sure. It's about travel agency websites. We use internal search to show dynamic content, like the top 10 cheapest hotels in the searched city. Okay. So the page frame loads in an instant, but the top 10 cheapest hotel results load dynamically, like 30 seconds after the search has been performed, because the website has to perform this search in the backend and then compare and refine the results in order to list the top 10 cheapest hotels for the searcher. So it takes a little time to list them. So right now, Googlebot sees only the background of the page and those 10 empty placeholders where the results will load a little later, after the internal search has been performed. Since the trend for travel websites is to bring information that's as fresh and as accurate as possible, I'm wondering what Google is doing about this. Of course, we can list some static content on these pages, like all other websites are doing nowadays for Google, if I may say so, but that kind of defeats the purpose of what most users want to see now: fresh and cheap. Yeah, so you posted a link to a screenshot from, I think, the Fetch as Google tool or the mobile test. Fetch as Google, yeah. So if it doesn't load there, then we can't index it. But usually that's more a matter of us not being able to process the JavaScript, or maybe being blocked from actually accessing the content there. So it's something that you can do in a way that would work. It's not by design that it would never work.
So that's something where you can dig into the details using things like the mobile-friendly test, to see if there are JavaScript errors involved, if things are blocked, and refine from there. So it's not impossible, but it takes a bit of work. John, I've dug into that. We've made sure that nothing is blocked from Google. The only thing we want Google to do is to wait a little bit for the dynamic content to load onto the pages. This is the next step, if I may say so. This page is not like an endless scroll — Facebook-like, let's say — it's a limited, 10-results page. So it's limited. The thing is, Google should be waiting a little bit for the dynamic content. I'm only giving you an example, but I'm sure there are a lot of other examples out there in the wild. And since this is the trend — people want to see dynamic content, because it somehow sorts things, and people spend less and less time on websites and want to find the perfect results as fast as possible, if I may say so — I was wondering if you guys are looking toward this enhancement. So we do wait a bit for it. But if people are impatient, then that's a sign that you should be faster anyway. So that's something where I would look into that anyway. But I think, looking at the screenshot, all of the items there were blocked and were just, like, gray blocks. So that feels like something that's more of a technical issue rather than just a timeout problem. Yeah, I was about to say, we do see a lot of dynamic content that gets indexed without issues, even if it uses JavaScript and stuff. So if we are timing out, then you might have an issue in terms of how long the searches take, and that might actually reflect elsewhere as well. And yeah, we wait quite a while for content to finish. Can you give me a timeframe? How long do you wait? That's really, really tricky, because — so the thing is, the reason why we can't give you a timeframe is because time — this is gonna sound really weird, bear with me for a second — time in Googlebot's rendering is special and doesn't necessarily follow Einstein's principles. So I can't really say much. What I can say is, if the network is busy and the network is the bottleneck, we're probably waiting, but we only wait for so long. So if you take like a minute or 30 seconds, then we're probably timing out in between. But there's no hard number: if I tell you 10 seconds, that might or might not work; if I tell you 30 seconds, that might or might not work. So I'd rather not say a number. What I would say is, try to get it in as quickly as you can. And if you cannot get it quick, then try something like caching the search results, so that producing the results on the page becomes more or less instant, or try dynamic rendering on your side — that might be a workaround for this. What you can also try is to put it on the server side, and basically try to generate as much content as possible in the first pass. That's something that also benefits your users, especially if they're on slow networks. So yeah, sorry, I don't have any simple answer, but Einstein in Googlebot is funky. Yeah, well, we don't look for simple answers, because we've dug into this for a long time, but we do search for an optimal way of showing the users what they need, because we've been developing this website on feedback gathered from users for like 10 years.
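One way to act on the caching suggestion above: precompute or cache the expensive "top 10 cheapest" query, so the first HTML response already contains the results rather than empty placeholders that fill in half a minute later. A minimal sketch with a simple in-memory TTL cache — every name here is hypothetical, and a real site might use something like Redis instead:

```python
# Minimal sketch: a TTL cache around an expensive hotel search so results
# can be rendered server-side in the first HTML pass. Names are hypothetical.
import time

_cache: dict = {}   # city -> (timestamp, results)
TTL_SECONDS = 300   # refresh every 5 minutes: still fresh, but instant to serve

def cheapest_hotels(city: str) -> list:
    now = time.time()
    hit = _cache.get(city)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]                       # served instantly from cache
    results = run_slow_hotel_search(city)   # the ~30-second compare-and-refine step
    _cache[city] = (now, results)
    return results

def run_slow_hotel_search(city: str) -> list:
    # Stand-in for the real backend search; imagine this takes ~30 seconds.
    return [f"{city} hotel {i}" for i in range(1, 11)]
```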
So we are thorough in this, but we also want to be able to get the users to us through the help of Google. So we try to have a blend here, you know. Okay, but I did understand what you said, and thank you very much. John, are you looking forward to 5G? Are you guys looking forward to it? Oh yeah. 5G — who doesn't? I don't know, I have 16 gigabytes on my phone. So the seconds will not make an impact anymore as things develop, right? So it doesn't eventually... I think it probably depends on what the website actually does. One of the things that's tricky with speed in rendering is that we can cache a lot of the stuff that's sent from a server, more than it would be in a browser, because we can use our index for a lot of these things. So sometimes, if JavaScript is cached on our side, we don't have to fetch it. And then if you compare the times again, it won't match what a user sees; it won't match what you see in WebPageTest.org. So it's really, really tricky. And for the parts where we do know it takes longer, we'll be a bit more patient. But it makes it tricky to test. That's why we have all of these testing tools now that show as many errors as possible, to make it possible to figure out: does it not work at all? Does it sometimes work? And where are there sometimes issues that come up? Let the revolution begin. We'll see. For very large websites, does the order of URLs in the XML sitemaps matter? No. We don't care. It's an XML file. We pull in all of the data. We process it all at once. Then what about the priority parameter in sitemaps? We don't use that at all. So that's something where, in the beginning, we thought, oh, this might be useful to figure out how often we should crawl pages. But it turns out, if you ask webmasters, everything's high priority — they're like, everything is priority one, it's most important. And similar also with, I think, the change frequency in sitemaps, where we also noticed that people tell us things are changing all the time, but the page was last updated last year. So if you have the change frequency and the date, then we get that information from the date anyway, and we ignore the change frequency. Let's see. Should corporate schema be added to just the home page, the contact page, or all pages? As far as I know, it's just the home page. I don't know. It's supposed to just be one page, usually, with Organization and Corporation markup. That's generally the recommendation. OK. Well, if you're a news website, then the home page is not really about the organization, but OK. I guess it doesn't matter as much which page. It's just, like, do not put it on every page that you've got — I guess that's the more important bit. I think it depends: if you're on the news side, it probably makes sense to put it on the contact page or the about page or something, whereas on a shop or restaurant website, it's probably OK to put it on the home page. I think also, in this case, it doesn't matter as much for us, because we need to be able to find it on somewhere like the home page or contact page, but if we have it elsewhere, it doesn't change anything. So the big thing not to compare this to is the review markup, where we sometimes see people put company reviews on all pages of the website with the hope of getting the stars in the search results for every page on their site — and that would be bad for us. But the contact information, if you have that marked up, that's fine; I don't see a problem with that. Oh, I'm looking at that one. OK. So, question in the chat.
The Google website speed test we've been using has been recording very slow page load times, but the independent tests we did with colleagues overseas displayed a very quick page load time. Does this false recording Google measures affect how our site is ranked in Google's algorithm? What are they determining as a slow time on their end? It's not my question, but to give some context: the new Lighthouse data for page speed is way more harsh than PageSpeed Insights used to be. So something that had a score of 80 on PageSpeed Insights might be a red 29 on Lighthouse. So that's a good question. Is that likely to cause — because we know that on mobile, very slow sites could be demoted — so is it right to say that if you're in the red on the Lighthouse test, you should really be improving things, because it could cause a demotion? Or is there a cutoff? So we don't have a one-to-one mapping of the external tools and what we use for search. So that's, I think, pretty hard to say. Yeah, that makes it really hard to say. But in search, we try to use a mix of actual — what is it — lab test data, which is kind of like the Lighthouse test, and the Chrome UX report data, where essentially what we're measuring is what users of the website are actually seeing. Not for all sites, right — the Chrome UX data is only available for really large sites. Like, for most of the smaller sites that we assess, we don't really have that much data, yeah. OK, yeah. Well, I would say really large sites that actually get a lot of traffic — it's more that it's for heavily trafficked sites, yeah. So what I see a lot, from the people I encounter, is that generally they're measuring time to first byte, whereas — especially with Lighthouse, when we're talking about the new, harsher scoring — I think y'all are more interested in first contentful paint and time to interactive, which come after that. And that can be where the discrepancy really is. Because a lot of developers, I know, say, my page loads in two seconds, and I measure it, and it's 25. And they're really upset with me, because I'm trying to show them what really counts. I think a lot of times, when I'm listening to SEO people too, they're also looking at some kind of time-to-first-byte measurement. It's also important to see that Lighthouse, for instance, specifically measures for a 3G connection on a mid-range, like a medium-performance, phone. Yeah. So basically, if you're on a recent Apple Macintosh or a recent fast Windows computer, with a really good wired connection or a really good Wi-Fi connection in your office, of course you're seeing a load time of two seconds — but a real user with their phone out in the wild is probably not seeing that. So it's one of these cases where it never hurts to make your website faster. But it's really, really hard to say how it would look from the inside perspective, as we're using very specific metrics that don't necessarily map one-to-one to what the tools are exposing. But Ilya said that it's important to fix it, in one of his talks. Well, yeah, of course it's important to fix that, because you don't want your users to wait on your website forever — that's going to hurt your users, and that's going to hurt you in search. But I would say, just look at the tools. If the tools are telling you you're doing well, then you shouldn't worry too much about it.
If the tools are telling you you're doing really not good, then I think the time spent on figuring out whether what it says is relevant is wasted — you should rather look into making the site faster. So CNN, for instance, their score is a two. So they should worry that their mobile score is a two? Because the thing was, before, with the old-school PageSpeed Insights, it was like, don't obsess about the number, you know. Right, yeah. The score doesn't mean anything to us. It's just all about the user experience and so on. But now, we should worry about the score? So I wouldn't say worry about the score. OK. For the users, I'm saying — take the warning seriously now, right? Yes. OK. Take it as: mobile performance is a very important factor for your users, as well as everything else. So I would say, make sure that your website performs well in real-world conditions. Maybe even get a cheap phone and try the website every now and then. That's something that I like to do, and that I used to do before I joined Google with the development team that I was working with. I was like, look, do you want to use this website on this phone? And it's like, oh, this is horrible. Yeah. So maybe we should do something about it. Do they still let you use, like, a 56K modem? Yeah, you know, so you can really see, if you want to go really slow. You don't even have to use a modem — in Chrome, you can set it up and try different connection speeds. Yeah, yeah. You know, emulated. Or simulated, yeah. Those are, I think, really good things to look at. And also, look at your user base. Look at your analytics data. If you're seeing that people are only using your website with, like, a high-end iPhone, then maybe it's less of a problem than if you're seeing people connecting to your site from random rural connections, which are slow, on low-end devices — then maybe that matters more. What do you do in unique cases? If most of your users are still coming from — I don't want to say the word laptop, because it's like the whole EMF thing going on — but if the users are all coming from desktop, should I just ignore... well, you should not ignore mobile? If you really have done your research, what do you do in unique cases? I think it's worth looking into whether they're only coming from these devices because the site doesn't work on phones. Which is something that, for example, with Search Console, we had these constant discussions, where they're like, well, nobody's using a phone to use Search Console. And I was like, well, have you tried using it on a phone? And now, with the technology on phones, it's like, well, look at all these people that are using a phone. Because it's a product that's really, really dense, and they want to see it on a desktop, like a 32-inch screen or whatever. Yeah, I don't know. I mean, there are interfaces available where you can roll out tens of servers from your phone. So if you can roll out websites from your phone, then there's a lot of stuff you can do from your phone. Yeah. Yeah, I'd be interested in that. I think we're out of time, so in the interest of keeping this recording at a reasonable length, I think we'll take a break here. It's been great having you all here. Thank you for taking the time to travel over and make it here. Thank you for joining in. Thanks for submitting all the questions. We didn't get to a lot of questions, but I'm sure we'll have more time in future hangouts.
And there's always the webmaster forum — I highly recommend that. So, definitely. Thank you, guys, for joining us and for making such a great team over there. Have a nice evening. Have a nice evening. Thanks for joining. Bye, everyone. Bye-bye.