All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these office-hours hangouts, where webmasters and publishers can join in and ask us questions around their websites and web search, anything like that. A bunch of questions were submitted on YouTube already, so we can go through some of those. But if any of you have anything on your mind that you'd like to talk about beforehand, feel free to jump in.

Hi, John.

Hi.

I would like to ask some questions. One of our clients is a non-profit organization, so they arrange a lot of events. When they arrange an event, they publish a post about it on their blog. But when the event is finished, they take the page off the blog. In this case, do we need to create any 301 redirects for those event pages?

I think the part where they'd be missing something, where it would be useful for their website in general, is if there were some kind of persistent landing page for these kinds of events. For example, if they do a monthly event, then have one page for that monthly event, just as a placeholder: we do this event every month, and some information about it, so that people can link to one persistent place, and that page can gain value over time. Whereas if you have just individual event pages for each date, and those turn into 404s, then all of the links, all of the value that those pages collect, kind of disappears again. So have something like a persistent page that says when the next event is. And if you want to keep an archive of the old events, maybe have a separate archive section, but keep one persistent place for the events.

OK, do you suggest the same thing for an offer or a special promotion?
I mean, it depends on how you want to have those found. If it's really a complete one-off, where you don't care whether people link to it over time because it's such a one-off thing, then that's something you might just want to remove afterwards. But if it's something that happens regularly, then having a persistent page is fine. So if you have something like Black Friday offers, and you have one page that you always use for Black Friday, that might make sense. Whereas if you have separate pages for Black Friday 2017, Black Friday 2018, then all of those pages have to stand on their own, and it'll be hard for each new page to gain value.

The last question. We have a client who builds garden sheds and, at the same time, builds playgrounds, so they have two types of products. They recently decided to build a new website for the playground products, and they want to move all the playground products from the old website to the new playground website, so they want to redirect all the playground product pages to the new domain. Now the question is: will the rankings they have for the playground keywords on the old domain transfer to the new domain with the 301 redirects, or will it take time?

If they're creating a new website that is a mix of multiple other websites, that's not the same as just moving from one domain to another. If, for example, they have all of their content on one domain and they just want to move it to another one, that's a situation where a site move, a setup with redirects, works really well. But on the other hand, if they have playgrounds and garden furniture and they want to create one new website that is a mix of both, or they want to take one mixed website and split it into two separate websites, then we essentially have to re-evaluate the final state.
We can't just say this plus this equals the new one; we have to figure out what the right balance is.

Thank you, John.

Sure. Any other questions before we jump in with the rest? That's fine, too. I mean, you're welcome to jump in in between if anything pops up, or if there's anything that I can help clarify along the way. Let's see, what do we have?

Our news article template features two bylines: one at the top below the title, which includes only the author name, and another one at the bottom, which includes more detailed information about the author. Is it OK to have two bylines on the page? Does it create any confusion for the crawler in understanding who the author is?

So from our point of view, it's fine to have these bylines on the page. It's not something we're explicitly looking for; we don't use the authorship markup or anything specific like that. It's essentially just a way for you to tell your users about the background of that page. If you're saying this page was written by these authors, then that's a good place to put it. Some people put it at the top, some people put it at the bottom, some people have both. I think that's totally up to you.

What's the best place for the schema markup with itemprop author? Shall we add it to both bylines or just one?

From my point of view, where you add it is essentially up to you. What I would not recommend doing is adding it in two places on the same page, because that just makes it a lot harder to maintain properly and to understand what it is you're trying to say with the markup. So I would pick one place and use the markup there, rather than putting it in two places. Or, if you use JSON-LD for the structured data, you could put that somewhere in the head, for example; it doesn't need to be in exactly the same place where the name would be shown.
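As a sketch of the JSON-LD approach mentioned here, an article's author can be declared once in the head instead of marking up both visible bylines; the article title and author name below are made-up placeholders:

```html
<!-- Hypothetical example: one author declaration via JSON-LD in the head,
     instead of adding itemprop="author" to both visible bylines. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example article title",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  }
}
</script>
```

The visible bylines stay as plain HTML for readers; the structured data lives in one place, which keeps it easy to maintain.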
We're an online tour operator and have implemented aggregate ratings for the accommodations we offer via schema.org/Hotel, and for our offered destinations via schema.org/WebPage.

I think this goes into the change that we recently made with regard to the review rich snippets that we show in search, where we essentially made a few changes. On the one hand, we limited the review rich snippets to a certain set of types. On the other hand, we made it so that if you're reviewing your own entity or your own business, you can't host that review on your own site; or rather, if you host it on your own site, we just won't show it. So those are the main changes we made there. I think what you're probably running into is that you used to review, essentially, the web page, and that's not something we support anymore. So that setup you have, where users can review the page, is not something that makes a lot of sense from our point of view. We expect reviews to be about a specific product or a specific thing that is actually being reviewed, something that anyone can look at and review in a comparable way. For example, if you have a car that people are able to review, then everyone is able to buy that car and review exactly the same product. Whereas with something like the destination page you mentioned, I think it would be hard for multiple people to review the same geographic region based on the same criteria, so that probably wouldn't make much sense there. Of course, it's totally fine to show these kinds of reviews on your pages if you think it makes sense for users; it's just something we wouldn't show in the search results.

Our site got hacked. Oh, is there a question? Sorry?

Hey, John. My question is about the Google Indexing API.

OK.
Yeah, I recently started with the Indexing API for one of my sites, a job site. But I'm facing an issue with the getMetadata call of the API, and I'm not sure what API key I'm supposed to use. During the setup, I did create the service account and the OAuth client ID, and I've gone through all three authorization steps. But I'm still not able to figure out what API key I'm supposed to use. And I also have a question about processing requests in batches.

Yeah, I would check in with one of the Google APIs help forums, because that's something that's kind of hard to explain live in a hangout. It's almost more helpful to look at the code that you're actually trying to use. So for the indexing... go ahead.

Yeah, I did post in the forum, but there is no specific forum for the Google Indexing API. Even when I search, it points me to the general Webmaster API forum, and when I posted there, it was flagged as off-topic. So can you give me any reference for exactly where I should post?

Another idea might be to just post on Stack Overflow. Since this is kind of a generic Google API, that seems like something someone on Stack Overflow would probably be able to help with.

OK.

John, hi, it's me again. In the last hangout, I was asking about Google for Jobs indexing. We were having problems with our visibility: we did a migration, and we saw a drop in our Google for Jobs traffic. You said to ping you with the website, et cetera, and I did that in the comments. I was wondering if you had any updates?

I don't think I heard back from the team, but I passed it on to them to look at. But it might be good to check in with them again if you aren't seeing any changes.
Usually, they're pretty good at picking these kinds of things up, but if you're still seeing problems and it's not performing the way you would expect, then I'm happy to ping them again. If you can just send me a quick note, then I won't forget.

OK, I'll add it in the comments. Thank you.

Thank you. Also, OK: our site got hacked. The hack got removed, but it left virtual pages indexed in Google. How can I de-index those pages?

So, sorry to hear that your site got hacked. That's something that happens to lots of sites, and it is always a bit of a hassle. With regard to "the hack got removed and some virtual pages are still indexed in Google," the first thing I would double-check is that these pages actually no longer exist on your website, specifically for Googlebot. What we sometimes see is that hackers create these kinds of virtual or fake pages on a website in a way where, when you try to go to them manually, they don't work, or they look like they don't work; but when Googlebot goes to the page, they do show some content. So for the webmaster it looks like it's completely cleaned up, while for Googlebot it looks like all of this content is still there, and we'll continue to index it. So the first step I would take is to check with the testing tools to make sure it's really, really removed.

Then the second step, with regard to getting these out of Google: there are essentially two approaches you can take. One is, if you have individual pages that are very visible in your normal search results, you can remove them with the URL removal tools in Search Console. With those, you can specify a subdirectory. If these pages are tied to a specific subdirectory, and often they are, then you can specify that subdirectory and say, everything in this subdirectory I don't want to have shown in search. And then for a certain time, I think it's something like 180 days.
We will filter all of those out of the search results. For individual URLs, if you can't isolate a subdirectory, you can submit them there as well. It's just a very manual process, and there are some limits, I think something like 1,000 URLs a day that you can submit. Because of those limits, I would focus on the URLs that are actually visible in your normal search results. So if you search for your company name and you find five hacked pages from your website, that's definitely something to clean up. On the other hand, if you only find those hacked pages when you search specifically for the hacked content, say, your company name plus "free download" plus whatever hacked content they put on your pages, then that's probably not as critical, because normal users wouldn't search like that; they'd be searching for your company name or for your products. So usually there's a set of very visible pages, and those are the ones I would remove.

For pretty much everything else, I would recommend just making sure those pages return a normal 404 status code. That tells us the page no longer exists, and the next time we go off and try to crawl those pages, we'll see they're gone and can just drop them from the search results. That means there's no extra work involved for you; it's cleaned up automatically over time. It can take a bit of time, often a couple of months, maybe half a year, for all of these pages to be recrawled and dropped out, so it's not something that happens right away. But again, unless you're explicitly looking for the hacked content, oftentimes you don't see those pages. So those are the two approaches I'd take. And definitely make sure, before you go down that path, that these pages really don't exist anymore for Google.
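As a sketch of that triage, once you've collected the HTTP status codes the old hacked URLs return (ideally fetched as Googlebot, for example via the testing tools), you can sort them into "already gone" and "still needs cleanup or a removal request." The URLs and the helper name here are made up for illustration:

```python
# Hypothetical helper: split old hacked URLs by the status code they return.
# 404/410 means the page is gone and will drop out of the index on its own;
# anything else still needs cleanup or a URL removal request.
def triage_hacked_urls(status_by_url):
    gone = [u for u, s in status_by_url.items() if s in (404, 410)]
    still_live = [u for u, s in status_by_url.items() if s not in (404, 410)]
    return gone, still_live

statuses = {
    "https://example.com/free-download-1.html": 404,
    "https://example.com/free-download-2.html": 200,
}
gone, still_live = triage_hacked_urls(statuses)
# gone contains the first URL; still_live contains the second.
```

Anything in the "still live" bucket is a sign the hack isn't fully cleaned up yet.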
And if they still exist for Google, then obviously there's still something on your site that's affected by the old hack.

How do crawlers deal with GDPR cookie bars? Obviously, no content may be loaded before a user has opted in. We see a lot of data in Search Console that isn't loaded and seems blocked.

Yes, that can be problematic. In general, Googlebot does not click on anything, so Googlebot will not go to an interstitial and say, "I accept." Instead, Googlebot will try to crawl and render the page as it is served to Googlebot. In particular, if you have a banner that is shown on part of the page, or a banner that is shown on top of the actual HTML content, then we can still see and index the actual content, so that's less of an issue. On the other hand, if no content at all is loaded, and it's just this interstitial page, then we would only see the interstitial page and think, well, this is what you want to have indexed, so we'll try to index it for you. And that's probably not really what you want to have indexed.

The other variation I sometimes see is that you redirect to an interstitial URL, and then from there, when the user has accepted, you redirect back. That's equally problematic because, on the one hand, the content is not there when we look at that interstitial URL, and oftentimes the URL of the original page is included in a cookie or some kind of snippet that we can't pick up. What happens then is we see all of the pages on your website redirect to one page, and we think, well, you're redirecting your whole website to one single page, and this single page has some legal information on it, so maybe you want to replace your whole website with this one page. And probably you don't want to do that.
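The overlay pattern described here can be sketched with made-up markup: the consent bar sits on top of the page, while the article HTML is present in the served response regardless of consent:

```html
<!-- Sketch with hypothetical class names: the consent banner overlays the
     page, but the actual content is still in the HTML that gets served. -->
<body>
  <div class="consent-banner" style="position: fixed; bottom: 0; width: 100%;">
    We use cookies. <button>Accept</button>
  </div>
  <main>
    <h1>Actual page content</h1>
    <p>This text is in the HTML whether or not the visitor has consented,
       so a crawler that never clicks "Accept" can still index it.</p>
  </main>
</body>
```

The contrast is with the redirect-to-interstitial pattern, where the served HTML contains nothing but the consent page.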
So from that point of view, I would recommend against doing this kind of redirect, and instead showing a banner on top of the actual content, so that users get the banner you need to show for legal reasons, but the actual content is still loaded behind it and search engines can actually process it. Of course, the other thing you might be able to do is think about which users actually need to see this interstitial. Depending on what you need to do, you might discover that, for example, users in the US don't need to see it. I don't know the guidelines in this specific case, but that's something you can double-check. Googlebot primarily crawls from the US, so if there's something you don't need to show users in the US, then you don't need to show it to Googlebot either. That might make things a little easier. In some legal cases it can make things harder: if there are policy or legal reasons why you can't show any of your content in the US, then obviously Googlebot can't see that content either. So, depending on which case applies to your specific website, there are different options.

The way to test this is ideally to use the URL inspection tool, where you can see how we would render a page, and you can also look at the HTML that we pick up and use for indexing. If you check your pages and you see the interstitial being shown in the rendered version, that's OK as long as the actual content is still present in the HTML version, because then we can still pick up the actual content. On the other hand, if you look at the HTML version and there is no content at all other than the interstitial, then essentially we only have the interstitial content to index, and that's probably not what you want.

Hey, John, I have a question. This is related to the image sitemap. We have posted our images on Amazon AWS.
So the URLs are on the S3 bucket domain, and then, as a subdirectory, we have our domain name. Can I just use these URLs in our image sitemap?

Sure, sure, that's perfectly fine. Images don't need to be on the same domain. So if you're using a CDN like Amazon's, or any other kind of CDN, you're welcome to host your images there. The one thing I would caution about, though, is that indexing images takes a lot longer than indexing web pages, and it also takes a lot longer for redirects on images to be processed. So if at any point in the future you decide to move to a different CDN setup, you would have to set up redirects from the old URLs to the new ones, and for images, processing those takes a lot longer. So if your website really depends on images, on image search in particular, then I would recommend going down the path of using a subdomain of your own domain, which you can then route independently depending on the CDN you're using. If you use a subdomain, those image URLs can remain the same for a longer period of time, and you can swap out the CDN in the back without as many problems. I don't know if that's possible with the AWS S3 bucket setup, but that's what I would recommend doing with images, especially if you're unsure of the long-term setup of your infrastructure: which CDN you want to use, which one is cheaper, which one is faster, those kinds of things. The closer your URLs are to your own domain, the more likely you'll be able to just reuse those URLs in the long run, and that makes it a lot easier when you have to change the infrastructure.

OK, thanks.

Sure.

Hi, John. I've got a question about the new rel attributes for links. Until now, Google respected the nofollow directive.
But when you are on an e-commerce website, you have links for categories and links for attributes; I don't know the name in English, faceted navigation with attributes. In e-commerce, for example, each attribute links to a page that has no interesting content for Google but is useful for the user. And sometimes I just put nofollow on this sort of link, so until now, Google didn't crawl these links. Now that this has changed, what will Google do?

So it's not 100% defined, but the plan is to make it so that you don't have to make any changes: we will continue to use these internal nofollow links as a sign that you're telling us these pages are not as interesting, that Google doesn't need to crawl them, and that they don't need to be used for ranking or indexing. It's not a 100% directive like robots.txt, where you say these will never be crawled, but it does tell us that we don't need to focus on them as much. For us, the main change with nofollow and these new attributes is for outbound links, from your website to another website. Within a website, for this kind of faceted navigation, for categories and sorting and things like that, nofollow continues to work.

OK, and for those attributes, ugc and sponsored, what is the way we, as developers, must implement them?

You don't have to implement them. We think it's a good idea, to make it easier for search engines to understand which of these are... sorry?

Is there a reward for adding them, so that Google understands the content more, or is it just a hint for Google? And is there no effect on the ranking, on the PageRank passed toward the other website?

Yeah, it's essentially the same handling as with nofollow, except that you're telling us in a little more detail why you want to nofollow this link. So, is this an advertisement, or is this a blog comment?
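In markup, these hints are just values of a link's rel attribute (the URLs below are placeholders):

```html
<!-- Paid or advertising placement -->
<a href="https://example.com/partner" rel="sponsored">Partner offer</a>

<!-- User-generated content, e.g. a link dropped in a blog comment -->
<a href="https://example.com/commenter-site" rel="ugc">Commenter's site</a>

<!-- Plain hint not to use the link for ranking; values can be combined -->
<a href="https://example.com/other" rel="ugc nofollow">Other site</a>
```

Combining values, as in the last line, is allowed for compatibility with search engines that only understand nofollow.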
So those kinds of differences can make it a little easier for our algorithms afterwards to think about, well, maybe these are good blog comments, for example, and maybe we should focus on them a little more. At least we have the ability to understand the difference in what you're trying to say. With nofollow in the past, you were basically saying, I just don't want you to look at this link; with the new attributes, you can tell us a little more about why you don't want us to look at it. And again, it's for outbound links, so it's not something you really need to worry about within a website.

OK, and is there a way to use the HTML structure, if you have a sidebar or a section just for comments, to make this better understood by Google? So when you have a sidebar or a comments area, and all of the links in that block are nofollow or ugc links, you'd have the HTML structure of a page with a distinct section for all of those links, without affecting the content.

OK, so something where you say everything in this section should be nofollowed?

Yes.

Yeah. I don't think there are any plans to have a section-level or HTML-element-level nofollow. So you would either have to do that on the page level, which usually doesn't make much sense, or on the per-link level, so that you mark up each individual link.

OK, thanks.

Sure.

Hi, John. So, two questions, please, if I may.

Sure.

First question: a few days ago, three of my websites were moved to the mobile-first index, and I was wondering if you just ran another batch of moves to mobile-first indexing. Have you heard from other webmasters lately being moved to mobile-first indexing, or is this just my case?

I don't think they picked just your websites. This is something that we want to roll out to all websites, so we're doing that step by step, as we can figure out that sites are more or less ready.
We shift them over to mobile-first indexing. The web is pretty big, and I think we have half of the search results moved to mobile-first indexing now, probably a little more. So the team is still moving forward on that.

Great, thanks for confirming. Second question. I don't know if it relates to this discussion, but I've seen the video Bartosz did with you and Martin, which was just mind-blowing. My question is: if I have a full JavaScript website, a single-page application, should I expect Google to be able to fully crawl my website in the future, also taking into consideration that Googlebot recently moved to the evergreen version? Should we expect, in the next few months, that Googlebot will be able to crawl websites fully, including the JavaScript, in a normal way?

So, we're getting better. From that point of view, I don't think we can say we will absolutely be able to index everything with JavaScript, and we don't have any specific timeline where we can say that by then we will be able to do that. I think there will always be specific edge cases that make it hard sometimes. A really basic one is that a lot of JavaScript-based web apps use one single URL for all of their content: you click around, and it stays one single URL. If you use just one single URL, then we will only be able to index that one single URL, and we won't see all of the other content that you could load by clicking on different things. So that's one basic example: we can do a lot of things with JavaScript, but we can't make up URLs for your website that don't exist and index content like that. From that point of view, I think we can get a lot better. We can definitely get to a situation where a normal JavaScript-based website that uses different URLs and normal links is something we should be able to crawl, render, and index normally. I see that as a reasonable expectation for the team.
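The single-URL problem can be illustrated with a made-up snippet: content reachable only through a click handler has no URL of its own, while a real link gives the crawler a distinct URL to fetch, even if a client-side router still intercepts the click:

```html
<!-- Hard to index: content only changes via a click handler and the URL
     never changes, so there is nothing beyond the one URL to crawl. -->
<span onclick="loadProduct(42)">Red garden shed</span>

<!-- Crawlable: one real URL per piece of content; a client-side router
     can still intercept the click and render without a full page load. -->
<a href="/products/42">Red garden shed</a>
```

The function name and path here are hypothetical; the point is simply that each piece of content needs its own linkable URL.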
I don't think that will happen in the next few months, but we're working on it. But we definitely won't be able to just take any kind of JavaScript application or website and automatically make sure it's completely workable in search. So that's one thing there. Let's see, something else I wanted to add there. Oh, yeah, of course. The other thing is: if you currently have a JavaScript-based website like that, you probably don't want to be in a situation where you're waiting for Google to figure it out. If you're thinking about search and you already have a JavaScript-based website, then there are ways to make sure it works where you don't have to wait. So if you have a business, if you have something that is important to be found in search, then don't just say, well, Google will figure it out someday. Instead, follow the guidelines that Martin has been putting together in the JavaScript videos he's been publishing, and make sure that your site works in search, because it's possible to make JavaScript-based sites work well in search. And it's also possible to make them work well in other search engines. Google is able to do a lot with JavaScript sites, but not all other services and search engines are, and there are ways you can work around that.

If I may share: the technical solution we have is dynamic rendering. We built our own internal crawler. The internal crawler goes over the sitemap.xml and creates snapshots for all the URLs it finds in the sitemap, and then saves those pre-rendered HTML versions on the server, to serve that version to Googlebot. But I do remember that at the previous I/O you mentioned hybrid rendering would be the future. So I wonder if that's still the case, because we have in mind to improve our rendering solution, moving from dynamic rendering to either server-side rendering or hybrid rendering.
Is this still the recommended solution?

I think you can do it with dynamic rendering, or with the setup that you have now. There are different ways you can do it, and they also have different effects on search engines, as well as on users, from a speed point of view. So I wouldn't just blindly switch from one technology to another because you think it might make sense; I would test it out and make sure it works well for your users from a speed point of view. Sometimes with server-side rendering you can do really fancy things for speed, and dynamic rendering works a little differently; there are trade-offs in the different approaches. So I would try it out with something small, to see what the effects are in your specific use case and whether it makes sense.

OK, thank you very much, John. It was very helpful. I appreciate it.

Hey, John. Sorry for interrupting again.

No problem.

So first of all, I have to say, I hope there will be more hangouts in the future, so we have a little more time to get all the questions answered. You're doing a great job. The question I have is a follow-up to the schema part; it's about the Google announcement "Making review rich results more helpful." To make it short, maybe you can answer yes or no: will this list of types be extended in the future, or is it final?

I don't know. I would assume it's final for the moment, because these kinds of changes get discussed for a really long time internally, and it's not something where they would just make a small list and then change it a few days later. They want to have a stable state.

Why I'm asking: in the hangout last week, we were asking whether we may change the schema type from Product to Hotel for reviewing our products, in a way. And, actually, we are worried about investing more in schema in general. So can we make a change here? Should we make a change?
And on the question about reviewing the regions or the destinations, which was asked earlier in this hangout: we see that Google collects stars for skiing areas in the knowledge panel, for example; you can see the stars are there. So we think we were collecting them in the right way, and now I'm wondering if we can maybe implement a ski resort type or something like that. Because, yeah, we're not sure whether this will keep working in the future.

Yeah. So one of the things with regard to the blog post is that I think the types we listed include their respective subtypes. Something like Organization, I imagine, includes things like hotels, and maybe it even includes things like ski resorts, which are kind of one clean entity run by one organization. I don't know for sure, so I would double-check that. But in the schema.org documentation, you can see the higher-level types and their subtypes, and that might make it possible for you to pick a type that matches both what you're trying to do and what we're trying to do.

All right. Yeah. Cool.

Cool. You also had, I think, a question about a spam network. I think that was from you.

Yeah, exactly.

Can you tell me a little more about what you're seeing there?

Yeah, each day we're seeing links pop up from that network, so a lot of domains appear. And we're wondering: do they hurt, or don't they? We're uploading the disavow file, not on a daily basis, of course, but sometimes we think, do we have to? And we have to do it for a lot of properties, which makes it even harder. And we have an agency, or rather two agencies doing an audit, that say this hurts your rankings. So everyone is thinking we should re-upload the disavow file, but I can't do it on a daily basis, and I don't want to.

Yeah. So usually with these kinds of things, that's something we can catch fairly well on our side, so you don't have to do that on a daily basis.
Definitely not on a daily basis. So that's not something that should be affecting your sites negatively. It sounds like you're really worried about this particular network, and maybe it's affecting a bunch of your sites, so what might be useful is if you can send me some information about what you're seeing. You can send it to me by email or on Twitter or something; let me just drop an email address here. Then I can pass that on to the webspam team to double-check. But in general, these kinds of weird link spam networks come and go all the time, and they don't have any negative effect on the websites being linked to. Sometimes they'll link to five spammy sites that they want to promote, and then ten good sites, to make it look like this spammy site is linking to some good sites as well; and that sometimes includes good sites whose owners don't know what is happening. But we have a lot of practice with these kinds of networks, so we should be able to just catch that, and you don't have to do anything there. And again, I'm happy to double-check with the webspam team on this particular case.

Yeah, we're really sure this must be spam, and it must be annoying for other SEOs, too. It would also be great if we could just remove those domains from the disavow file, because it's already hundreds of domains, which makes it a little harder to check the disavow file. OK, thank you very much; we'll send you the email.

All right, thank you. Cool.

Then there's a question about how to prevent Google from crawling or indexing a staging server. There are multiple ways; people do it in multiple ways. I think the most important part is that you don't link to the staging server, because if we can't find it, then we can't crawl it. But sometimes it still happens.
Ideally, what you would want to do is provide some kind of server-side authentication so that normal users, when they go there, are blocked from being able to see the content. And that would include Googlebot. You can do that, for example, on an IP address basis, you can do it with a cookie, or you can do it with normal authentication on the server: anything where you have to prove that you're the right person before you can actually look at that content. I think that's generally the best approach for staging servers. Can you use robots.txt to prevent it? You can also do that. That's the next part I was getting to. I think using authentication is the cleanest, because it means you don't have to change the normal settings on the site itself, in particular robots.txt, but also noindex meta tags, for example, on these pages. Because it's a very, very common thing that you set up your staging site with a new design, and you have a robots.txt saying, don't crawl any of this, Google, because I'm still testing things out. And then you push that staging site to production to make it live, and you accidentally include that robots.txt file, or you accidentally include all of these noindex meta tags. And then suddenly your normal website is blocked from crawling, or your normal website is blocked from indexing. There are plenty of examples of that. Yeah, I think it happens to pretty much everyone. It just happens. But ideally, I would just use authentication somehow with staging sites, so that you don't have to think about the robots.txt and the meta tags. If you can't do authentication, then robots.txt is a good way to help with that as well. What will happen with robots.txt is that if people are explicitly linking to your staging site, then we might index the URL of the staging site. So if you do a site: query for staging.yourdomain.com, then maybe you'll see a bunch of URLs that are shown there.
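The blanket robots.txt block described here can be checked with Python's standard `urllib.robotparser`. This is just a sketch of how a block-everything robots.txt behaves for any well-behaved crawler; the staging hostname is a placeholder, and nothing here is Google-specific:

```python
from urllib import robotparser

# A minimal staging robots.txt that blocks all crawling
# (standard robots.txt syntax: every user agent, every path).
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# No crawler that respects robots.txt may fetch any URL on the site.
# The hostname below is a placeholder.
print(rp.can_fetch("Googlebot", "https://staging.example.com/any-page"))
# -> False
```

This also illustrates the failure mode mentioned above: ship this exact file to production by accident, and every URL on the live site becomes uncrawlable.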
And they all have this snippet saying, we can't tell you what this page is about, because we can't crawl it. So that's something that is a little bit confusing in the beginning, but it's not problematic. Because your normal website will rank for your normal queries; with the staging website, we don't know what is there, so we don't show it in the search results. We can show it if someone explicitly knows that the staging site is there, but we wouldn't show it in the normal results. So I'd say ideally authentication, and then robots.txt if you can't do it any other way. And noindex I would try to avoid on the staging site, because it's really easy to accidentally push a set of changes with the noindex meta tag on your pages. OK, thanks. Sure. If one of our keywords is at the zero position and also shown in the fourth position, how will it be shown in Search Console? So in Search Console, we start counting at one when it comes to ranking. So if your page is being shown as a featured snippet, which is what people sometimes call the zero position in the search results, then we would call that position one. And if it's additionally shown in other places on the search results page, then what happens in Search Console is that we decided to count the average topmost position. So that means for a search results page where you're ranking number one and number four, we would count that as ranking number one. In particular for your website, that kind of makes sense, because you are visible in the first position. Similarly, if you're visible with multiple URLs from the same website on the same search results page, so perhaps your product page is ranking number five and then your home page is ranking number six, then what would happen there is that on a site level, when you look at the queries, we would count that as ranking number five. So that's the topmost position there.
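The counting rule just described can be sketched as a small illustration. This is an interpretation of the explanation above, not Google's actual code:

```python
# Each inner list holds all positions your site occupied on one
# search results page that was shown to a user. Per the description
# above, only the topmost (lowest-numbered) position per page counts,
# and those topmost positions are averaged across impressions.

def average_topmost_position(result_pages):
    topmost = [min(page) for page in result_pages]
    return sum(topmost) / len(topmost)

# Featured snippet (#1) plus a normal #4 listing on one page counts as 1:
print(average_topmost_position([[1, 4]]))          # -> 1.0

# Two URLs at #5 and #6 on another page count as 5; averaged with the
# page above, the site-level average position is (1 + 5) / 2:
print(average_topmost_position([[1, 4], [5, 6]]))  # -> 3.0
```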
If you look at it on a per-URL basis, then we will show one URL ranking at number five and one URL at number six. So it depends a little bit on how you look at it in Search Console, whether you look at it on a query level or on a URL level. But in general, when we count the rankings, we count the topmost position on that search results page, and we average that across all of the search results that are shown. Also, it's worth keeping in mind that this is not a theoretical ranking, but based on what was actually shown to users. So what can sometimes happen is that your page is shown in the featured snippet for a short while, and then afterwards is shown in the normal position for a longer period of time. And then you have kind of that number one position and then the number four position. And if you look at it yourself manually afterwards, it might be that you see it only ranking at number four and wonder, why did Google ever say I was number one? And that ranking that we show there, the position, is really based on what we showed people at that time. And sometimes it can happen that very few people saw it at position one. But because that's where we showed it, that's how we would count it. Let's see. Another one that is kind of an interesting question, where we could probably go on forever. I have a bit more time, so if you all want to stick around a little bit longer, that's fine too. In your Webmaster Guidelines, the first bullet of things to avoid is automatically generated content. And yet lots of sites do that. It kind of goes on with an example, and you can find examples like this in finance. You can even find it in Google News. What's up with this guideline in the real world? So I didn't double-check that specific example that you have there. But I think this is a really interesting question, and I think it's something that will probably evolve over time. In particular, there are two aspects that I see that are kind of common. I don't know, common.
Let's see. Maybe three. Well, let's talk about three things. So on the one hand, this guideline was primarily put up because a lot of spam is completely auto-generated, and we need some kind of a way to take action on that. And this is kind of the guideline that we chose to do that with. So a lot of spammy sites will take a list of keywords, and they will automatically generate fake sentences that make absolutely no sense, where when a user looks at those pages, they think, there's nothing useful here; but there are ads, so they click on the ads. And the Webmaster Guidelines here give us a little bit more room to say, well, this is a completely auto-generated site that makes no sense; therefore, we will remove it from search. Another variation of this that we sometimes see from spammers is that they take existing content and run it through machine translation, and they use the automatically generated translation and try to rank with that, where, essentially, the translation, again, is something where a user who looks at it would say, well, this doesn't make any sense. It's in the right language, but the content is not comprehensible. So that's, again, something that we would treat as automatically generated content and say, we need to take action on this. The examples that you're pointing at are things where I think it's almost going in the direction of, well, you're just providing a table of data in a little bit more understandable way. So you could imagine things like a weather report in a sidebar, where you could just show, like, the weather is this many degrees and it's cloudy, kind of in a table. Or you could take that information and just make a sentence out of it, which might be easier to read. And this kind of reformatting of some amount of data that you essentially have, and providing it in a way that is more readable, I don't see that much of a problem with that.
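That table-to-sentence reformatting can be as trivial as a one-line template; the field names and sentence wording here are invented purely for illustration:

```python
# Turn one row of structured weather data into a readable sentence.
def weather_sentence(city: str, temp_c: int, condition: str) -> str:
    return f"In {city} it is currently {temp_c}°C and {condition}."

print(weather_sentence("Zurich", 7, "cloudy"))
# -> In Zurich it is currently 7°C and cloudy.
```

The point is that the underlying data is identical either way; only the presentation changes from a table cell to a sentence.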
I'd prefer to make sure that the rest of the page has sufficient value, so that when people go to that page, they actually find value there, that they actually think this is something that is useful to them: after they searched in the search results, they clicked on something, and they found something that was useful. And that's the kind of thing where I kind of say, well, if it's useful to users, it doesn't really matter completely where it actually came from. So that's, I think, one aspect where things are a little bit different compared to a spammy site that just auto-generates gibberish content. The other, I guess more future-looking, aspect is that there are various machine learning algorithms that try to generate text as well, based on some amount of starter information. And more and more, the examples I'm seeing there are that these algorithms are able to generate something that is actually pretty understandable and actually pretty useful, where, if you feed it the right amount of information in the beginning, it will be able to take that content and write it up in a way that is actually really easy to understand and provides a lot of value. I don't know if that's so far along now that I would say this kind of auto-generated content is completely fine and nobody will notice that it was generated by a script rather than a human. But maybe a few years down the road, I could imagine that happening. And at some point, it'll be that when we manually look at these pages, you can't tell if it was written by a human or written by some kind of advanced machine learning setup. And at that point, does it really matter if it was generated automatically or not? From a user's point of view, it has the same value.
So with that in mind, I think at some point in the future, we will have to revisit this guideline and find a way to make it a little bit more granular, in that it differentiates between these totally spammy uses of auto-generated content and the actually pretty useful uses of automatically generated content. So one example that I've seen before is with regards to earthquakes. I don't know if this is actually still running, but I believe at some point this was set up by some news site or some government agency, where, if their sensors detected that a big earthquake took place in a certain location, they would automatically generate a page for that, with some automatically generated content around it, which would be something that we could find in search. So before any human reporter is able to actually sit down and write something up and say, well, this happened in this location, and it was this strong, and it likely had this effect, and if you're in this region, you should watch out for this and that, that's something that could be generated automatically, and it does have a lot of value to users as well. So that kind of differentiation between the different types of auto-generated content, I imagine that's something that we'll see come up more and more as a topic over the years. So especially if you're looking at cases where, say, a site has 9,000 pages that are generated with a template, and you look at those pages individually and you think, well, OK, they were generated with a template, but they're still actually pretty useful, then that's the kind of thing where I could imagine the Web Spam team at some point would say, well, do we need to take action on this just because a template happened to make these pages available, or do we need to take action on these pages based on the content that they're actually providing, and the value that they are, or are not, providing to users?
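Such an earthquake page could be auto-generated from sensor readings with a simple conditional template. The thresholds, field names, and phrasing below are invented for illustration; a real system of this kind would presumably use agreed seismological categories:

```python
# Generate a short, readable report from structured sensor data.
def earthquake_report(location: str, magnitude: float, depth_km: float) -> str:
    # Severity wording chosen from the magnitude (illustrative cutoffs).
    if magnitude >= 7.0:
        severity = "major"
    elif magnitude >= 5.0:
        severity = "moderate"
    else:
        severity = "minor"
    return (
        f"A {severity} earthquake of magnitude {magnitude} was detected "
        f"near {location} at a depth of {depth_km} km."
    )

print(earthquake_report("Example City", 5.8, 10.0))
# -> A moderate earthquake of magnitude 5.8 was detected near Example City at a depth of 10.0 km.
```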
So that, I think, will be an interesting thing to follow. OK, I think we're slightly over time, so what I'll do now is just stop the recording. You're welcome to stick around a little bit longer if you'd like to continue chatting, but I'll stop the recording so that it's kind of a reasonable length. Thank you all for joining, and I wish you all a great weekend. If you want to stick around, feel free to stick around; otherwise, feel free to jump in on one of the future Hangouts. Bye. Thanks, John, have a nice weekend. Thanks.