Hi, everyone, and welcome to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland, or at home at the moment. And part of what we do are these office-hours hangouts, where webmasters and publishers can jump in and ask us any questions around their website and web search. There are a handful of questions submitted on YouTube. We can run through those. But if any of you want to get started with a question of your own, feel free to jump on in. Or if not, that's fine, too. Let me just see. OK, so let's refresh the list and see what we have for today. From an SEO point of view, is there any difference between posting all external links and citations in footnotes, like, for example, on scientific papers, instead of putting them in the text? Is there any suggested guideline? So practically, there is a difference there, in the sense that when we find links within the content of a page, with some extra context, it's a lot easier for us to understand what the link is about. So you could imagine we have one paragraph of text, and there's a sentence in there with a couple of words that are linked directly. Then there's a lot of context for that link that tells us a little bit more about what is being linked to. And that helps us to better understand the page that's being linked from there. In comparison, if you had all of the links in the footnotes, essentially a block of links that are all together with no additional text around that block, then, as a human reader, you might be able to look at that and see, oh, there's a small number, and that matches a small number somewhere in the text. But essentially, it's completely separate. It's a block of links on its own. It doesn't have a clear anchor text. That's something that, from our point of view, would make it a lot harder to understand what those links are about. 
So especially if you're doing things like internal linking within your website, you want to make sure that you give as much context as possible for the pages that you're linking to. So from that point of view, I would strongly recommend just putting those links normally within the context, in the place where users would find them and where users would be able to use them directly. So instead of users having to scroll to the bottom and try to search for a link and click on it, they can access it directly. Some sites, like, I guess, Wikipedia is probably one of the more prominent ones, tend to do it with footnotes. And that's just the way that they do it. From my point of view, if you're making a normal website, then I would try to stick to kind of the trusted model of having links with clear anchor text on a page, with those links being placed normally within the page itself. Does the max-snippet robots meta tag also apply to the featured snippet? Does the data-nosnippet attribute not count towards the ranking of the page? Well, what's up with those kinds of meta tags? So the max-snippet robots meta tag essentially tells us how long, in characters, a normal text snippet can be. And that applies regardless of where it's shown. So whether that result is shown at the top as a featured snippet, or somewhere in between as a normal blue-link result, the max-snippet length is essentially the number of characters that we would show for that result. There's one exception offhand that I can think of. I'm sure there's something else I'm missing. But if you're using structured data to trigger a specific rich result type, then that would not apply. 
For example, if you have structured data to create a recipe, and that recipe is shown as a recipe result in the search results, then it doesn't really make sense to count the characters there, because these are different elements that all kind of build up this search result together. It's not one block of text that you can clearly limit by number of characters. So if you have structured data and your page is being shown with that appropriate rich result type, then the max-snippet length would not apply there. But for the normal search results, it would definitely apply. One thing also to keep in mind is that with some types of search results, we need to have a snippet that we can show. For example, if you have a featured snippet and you set the max-snippet length to, I don't know, one character or something, then it's pretty likely that we wouldn't show that as a featured snippet, because with one character there's not a lot of information that we can give. So from that point of view, if there's a specific search result type that you're aiming for and you want to use these new meta tags, then make sure that you're not blocking that search result type from appearing. Similarly with image results: if you say, I don't want to have any image preview shown, then we won't be able to show any image preview. So any search result that depends on an image preview to function wouldn't be usable from that point of view. With regards to the data-nosnippet attribute, I think that's a pretty cool attribute that lets you specify, within a page, which parts of the content you don't want to have shown in a snippet. And that also applies to all places where we would show the normal text snippet. It wouldn't apply to structured data. So again, if you're doing a recipe and you mark up a recipe, then we assume the structured data that you specify for a recipe is something that can be used as a recipe. 
Otherwise, if you don't want it used as a recipe, then don't use the markup. But essentially, for any normal text preview that we would show, that's something where that would apply. And it doesn't change ranking. It really only changes what we would show in the preview. So if there's something important in there, we would still rank that page for that content. We just wouldn't show it in the preview. Let's see. It seems that the title tags have again been reduced, and now my title is truncated. This is fine, since I could remove a few words. And essentially, I think it goes to, well, why do you keep changing the title tag length? I keep working on my titles, and then you change the length again. I realize that can be frustrating, especially if you're fine-tuning your titles like that. However, I don't know if it's always worthwhile to actually fine-tune the titles like that, mostly because of two things. On the one hand, the title length that we display can change over time, like you saw there. And on the other hand, sometimes we automatically generate titles for your pages, depending on the query and the page, to make it easier for users to understand why this page is relevant for their interests. So that's something that can change algorithmically anyway. So you're not always guaranteed that you will see exactly the title as you have it specified on a page. So from that point of view, I'd be kind of cautious about always jumping in and saying, oh, the title is a little bit shorter in the search results, therefore I need to rewrite everything. I would kind of let it settle down. And if you see pages where the titles are significantly off from what you'd like to have shown, then that's something I would focus on. And again, keep in mind we optimize the title based on the query. So just doing a site: query and seeing what titles are shown is not necessarily representative of what the user would see. 
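Editor's aside: the two snippet controls discussed just above, max-snippet and data-nosnippet, can be illustrated with a small sketch. This is a hypothetical Python model of the concept only, not Google's actual snippet pipeline; the SnippetExtractor class and build_snippet function are invented for this example, and the parser deliberately ignores edge cases such as void elements inside a nosnippet block.

```python
from html.parser import HTMLParser


class SnippetExtractor(HTMLParser):
    """Collect visible text, skipping any element marked data-nosnippet.

    Simplified sketch: nesting is tracked with a depth counter, and void
    elements (like <br>) inside a skipped block are not handled."""

    def __init__(self):
        super().__init__()
        self.skip_depth = 0   # > 0 while inside a data-nosnippet element
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if self.skip_depth or any(name == "data-nosnippet" for name, _ in attrs):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth:
            self.parts.append(data)

    def text(self):
        # Collapse whitespace between the collected fragments.
        return " ".join(" ".join(self.parts).split())


def build_snippet(html, max_snippet):
    """Drop data-nosnippet content, then apply a max-snippet length.

    A negative max_snippet means no length limit; 0 means no snippet."""
    parser = SnippetExtractor()
    parser.feed(html)
    text = parser.text()
    if max_snippet < 0:
        return text
    return text[:max_snippet]
```

With a negative max_snippet, the full text minus the marked span comes back; with a small positive value, the text is cut to that many characters, which is roughly the behavior described for featured snippets above.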
So I would double-check the queries that lead to your site. You can get those in Search Console. You can look at the impressions. And based on those queries, try them out. Then see if the title that's shown for your pages for those queries is something that you think makes sense or not. And based on that, kind of make a list of the things that you'd like to have changed, and work on those titles and try to make them a little bit better. And the same applies to descriptions as well. Descriptions are also generated automatically based on the page and based on the query that the user is doing. So doing a site: query to see the descriptions that we would show, kind of that preview text, doesn't necessarily represent what a normal user would see when they search for something very specific to your website. So that's two things to kind of watch out for. The other thing, just in general, is that I think we'll continue to make changes in the search results as we see that it makes sense. We test these changes quite extensively to make sure that the things that we change make sense for the users. It's not that someone just goes in and says, oh, let's see what happens when I add five more characters or when I change the font size by two or three pixels. It's something that is really tested intensively. It's something where we test it together with the quality raters, kind of on a manual level, to see if it still works out. And we test it algorithmically as well, in that we do kind of A/B testing with the search results all the time. We have, I don't know, hundreds of tests running at the same time. And with most of those tests we notice, well, this is not really working out the way that we expected or that we hoped it would happen. So we tweak the test. We try it again. And we try it again until we come to a situation where we see, well, actually, at the moment, based on what we see users kind of responding with, this algorithm, this UI change, makes sense. 
And maybe we should roll it out a little bit broader. So these are things that are usually in the pipeline for quite some time. And the teams here are constantly tweaking and trying to make things better. I think that's something that every website should do. If you're active online, the internet is constantly evolving. The user needs are constantly changing. The devices they're using kind of change over time as well. So you really need to constantly kind of be on top of things and A/B test and figure out where you can improve things. Sometimes incrementally, sometimes you can improve things quite extensively as well. Let's see. I seem to have missed that last pop-up. OK. We bought a domain from our competitor and redirected it to ours. Is there any way to find the old disavow file, or to disavow new backlinks to this domain? Even though it's in Search Console, it's not showing up in the disavow tool. So there are kind of two things here. On the one hand, I believe the disavow file will be visible when you have it verified normally. However, to verify it so that you can use it in the old tools from Search Console, you need to make sure that you verify it as kind of an old-style property. So with either the meta tag or the HTML file or one of the other methods there. And based on that, you should be able to download the disavow file once you have that verified. I don't know if you would see a lot of value out of redirecting a domain from someone else's website to your website. But if you bought that domain, and especially if you're in a situation where you're saying, well, there is a lot of direct traffic to this old domain, and it's something where I think my site would be a good match, then that might be a chance to kind of look into that. But again, you'll find the disavow file in the disavow tool. And for that, you need to create an old-style property. And you need to create the old-style property that was used to upload the disavow file. 
So things like HTTPS or HTTP, www or non-www, those are things that you need to keep in mind. You can just add all four versions and see where the file is. That might be an easier approach than trying to figure it out. Last month, I launched a new website and have been releasing content weekly on the blog, over 1,000 words weekly, which is getting good views due to social sharing. After one month, I'm not ranking in the top 100 for any of my keywords. And they tried Search Console, and they used the feedback link at the bottom of the search results pages to let Google know about this, but is there anything that can be done there? So I think, in general, you can't always expect to show up in the search results, especially with a new website, just because you're creating content that's out there. So that's something where it can take a bit of time for things to settle down, for our systems to understand that, actually, this is a pretty good website. And it can take a really long time, especially if it's on a topic where there's a lot of competition. So just because you have a new website and you're regularly creating content for it doesn't necessarily guarantee that you will be shown for all search results that you're creating content for. The other thing to keep in mind is, since you mentioned the over 1,000 words, we don't have any limit or any kind of guideline with regards to how many words per page or how many words per month you should be generating. Essentially, we try to make sure that we show relevant search results to our users. And sometimes those pages that we link to have a lot of text. Sometimes those pages don't have a lot of text. There's no magic number from our side where we would say, this is the number of words that you need to target, and then we will show you in the search results. So that's another thing to keep in mind there. I would definitely check out the SEO starter guide. We have that linked in the Help Center. 
That has a lot of good information there. I think, especially if you're targeting kind of this commercial niche of insurance, where it's probably pretty competitive, then I would make sure to try to get some help from someone who's a little bit more experienced with regards to making websites do well in search. Sometimes you just need a few tips. Sometimes you really need more long-term help, if it's something where you really need to support a website for the long run and make sure that it's competitive enough to match what others are doing in that space. Is there any way to tell Google, please crawl me more and ignore the duration time as the page gets slower? No, not necessarily. So what we have in Search Console is the crawl rate setting, which you can use to give us a little bit of information. But it's not something that would override our general systems, where we try to protect your server from being overrun. So that's something where, if our systems feel that your server is not able to cope with our crawling, then we will slow down, because we really don't want to cause any problems on a website. We really want to make sure that we can crawl your pages, we can index them reasonably, and that the majority of your traffic is actually for your users. It's not that you're making a website and creating the content, providing it just for Google. You're creating it because you have users out there who you hope will be attracted to your content. So we want to make sure that your website stays available for them, rather than kind of running it into the ground to crawl every last bit. Let's see. Hi, John. Can I ask you the last question, please? Sure. Thank you for the answer. I definitely know about the crawl rate settings in Google Search Console. And I talked on Wednesday with Martin Splitt about this. But we didn't find any good solution. 
But the thing is that the page is very large, and you can click Read More in my comment. There is a lot of information. It is not only about the crawl rate settings in Google Search Console. OK. Yeah. OK. Go ahead. Sorry. No, no. Do you want to tell something? Oh, no. OK. OK. Thank you. Thank you. The thing is that the website is on Angular 1, and we use Chrome Headless for dynamic rendering. And we have about 500 million pages, URLs. We only cache about six million, and the others are rendered on the fly with Chrome Headless, and it takes about 1.5 seconds per URL. So it is very slow. And the thing is that when you crawl the set of six million URLs which are cached, and which are most important for us, it's OK, because they are pretty quick. But in case Google is, I don't know how to express it, when you have enough of these URLs and try to find other ones, try to crawl other ones which are not in the cache, it takes about 1.5 seconds per URL. And it means that the mean of the duration time, or render time, or crawl time, goes really higher. So I know that in this case, it looks like Googlebot says to himself something like, yeah, maybe I'm the bad guy who is slowing the web server, so maybe I have to slow down. But I don't want him to. I want him to crawl as much as possible, because we have a lot of servers. But our problem is that the mean time for rendering is about 1.5 seconds per URL, which we can't get better with this technology, like Angular 1 and Chrome Headless. We know that we have to change the technology to, I don't know, React or something like this. Maybe PHP or something. Doesn't matter. But the thing is that in this technology setup, I want to tell Google, crawl me as fast as you can. No, I don't think that that would be possible. No. So let's see. I think, so in general, that's something where we would probably slow down. I would also not use the crawl rate setting in a case like that. 
Especially for really large sites, if you use the crawl rate setting, the maximum value of that setting is probably too small for what you need. So instead of setting it to the maximum value, I would disable that setting, so that we can crawl as much as possible. I think you linked to the domain in the question. I didn't see that. It looked like the question was complete there. And looking at that, it is the case that we're kind of limited from crawling, because we were kind of worried, probably for speed reasons at the moment, with regards to what we can crawl. I don't know when you switched over to the new setup. It looks like mid-September. We were crawling a little bit better. But it fluctuates a little bit. So it's really hard to say much there. But it's really something where we see the content, we recognize that sometimes it gets slow, so we limit the crawling that we do there to make sure that we're not causing any more trouble with regards to that. So one thing you could potentially think about is moving the kind of pre-rendered content to a different host, so that we can separate these two configurations. I don't know if that would make sense in your case, or if content would move between the kind of pre-rendered state and the on-the-fly rendering state. That might be something that would work. I don't know if that makes it any easier on your side compared to increasing the number of pre-rendered pieces of content. What I would also kind of double-check is whether the crawling that we do versus the pre-rendered content that you provide is in line, and that we're not spending all the time crawling pages that, from your point of view, you think are irrelevant. That might be something to kind of look at. You could see that in your server logs, where you pull out the URLs that we crawl and compare that to your list of URLs that you would have pre-rendered. But I don't see any other big, big approach there. 
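Editor's note: the log comparison suggested here, pulling Googlebot's requested URLs out of the server logs and comparing them against the pre-rendered list, is a straightforward set operation. A minimal Python sketch; the function name and the report keys are invented for this example:

```python
def crawl_cache_overlap(crawled_urls, prerendered_urls):
    """Compare Googlebot-crawled URLs (from server logs) against the
    set of URLs you pre-render, to see where crawl activity is going."""
    crawled = set(crawled_urls)
    cached = set(prerendered_urls)
    return {
        "crawled_and_cached": crawled & cached,    # fast, served from cache
        "crawled_not_cached": crawled - cached,    # slow, rendered on the fly
        "cached_never_crawled": cached - crawled,  # pre-rendered but unused
    }
```

A large "crawled_not_cached" bucket would point at exactly the problem described in this exchange: crawl time being spent on slow, on-the-fly rendered URLs instead of the cached set.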
Because if we see that things are slow, then we try to adapt. Usually the speed thing is not something where we would kind of say, well, your server appears to be really slow. But it's something where we'd say we want to limit the number of simultaneous connections that we have to your server. And if it takes, I don't know, two seconds to load a page from your server, and we want to limit the number of connections that we have simultaneously to, I don't know, 10 or some other number, then obviously there's only a maximum number of pages that we can crawl per second. So that's kind of where that speed aspect flows in. So if you need to serve these pages slowly, then I don't know if there's really kind of an easy approach there. Maybe another thing that you could do, depending on the pages and how you have them set up, is to have some part of them kind of server-side rendered and another part just dynamically included, so that the server-side rendering doesn't need to do all of the work and can be done a little bit faster. I don't know how your pages are built up or how your infrastructure is set up. That might be an approach as well. OK. Thank you for help. Thank you very much. I will take a deeper look. Thank you. Yeah. Let's see. What else do we have? Let's see if anything has been added. It looks like most people added their questions to the Friday Hangout, which is the top one in the YouTube list, which is not really the one for now, but that's fine. We can look at those on Friday. I want to move my best articles from the old domain to a brand-new domain. Is there any correct way to do it? Will cross-domain canonicals help in this situation? Or should I noindex the old ones? I don't want to 301 them, as I don't want to send any link or content quality signals to the new domain. I think you probably have to make up your mind there with regards to the old domain and new domain. 
So you can't forward signals to the new domain for these pages and say you don't want to forward signals, because either you're forwarding the signals or you're not forwarding them. It doesn't really matter if you're using redirects, if you're using a rel=canonical, if you're using a JavaScript redirect or a meta refresh or anything like that. If you're telling us that one page has moved from one URL to another URL, then we will forward those signals to that new URL. So in that case, if you want to move some of this content from one URL to the other, then using redirects is probably the best approach. If, for practical reasons, you need to keep it on two separate domains, then using a rel=canonical is a good approach. But in both of these cases, we forward the signals from one URL to the other URL. So that's just worth keeping in mind, in that you can't have both your old URL ranking with full power and your new URL ranking with full power if you're telling us that the old URL has moved to the new one. So pick one, whichever one you want. Personally, I usually recommend trying to keep things on as few domains as possible. On the one hand, for practical reasons, because you need to maintain all of this somehow. On the other hand, it makes it much easier for a good domain to be really strong if you have all of the content that you care about on one domain. But ultimately, that's up to you. Sometimes there are also kind of marketing things involved here as well, where maybe you're using a URL in offline advertisements. Then that's something where you might want to have that old content available on that URL, as well as having a rel=canonical set up to the one that you prefer to have indexed. Let's see. Hi. Hi. So I have a question about translated content. For me, it makes sense that if you translate content, not automatically but manually, it should be unique. 
But I see so many questions, and people are kind of confused in that they think that it can also be treated as duplicate content. So, yeah, I wanted to run it by you. Is it unique content if you translate it? Like, a real person translates it? Yes, absolutely. I think the easiest way to look at it is, these are completely different words. So if you translate the content from one language to another, it's completely a new piece of content. It's not something where we would say this is a variation of the previous one. It's essentially a completely new piece of content. You can link the two with hreflang if you want to do that, but you don't need to. Sometimes people just have a translated version of the content and have that completely separate. But it's definitely not seen as duplicate content. I think the one case where it kind of gets into the duplicate content discussion is when you have content that you're translating from American English to British English, for example, and you're just tweaking individual letters to match what the other side is using. Then that's something where we might say, well, this chunk of text is essentially the same as that chunk of text. But if these are different languages, it's completely different content. Yeah, OK, thank you. And one more question, about structured data. So if you have, for example, one kind of structured data which is invalid, or has errors, or is spammy in some way. So, for example, you have product structured data, and structured data for organization, and for posts, and only one of these structured data sets isn't valid. Does this mean that Google will just disregard all of them, or only the type which isn't valid or spammy? So it depends on how we would recognize that and ignore it. If it's something that the webspam team manually recognizes, then that's something where we've shifted to a more granular model, where we can try to isolate that specific type of structured data. 
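Editor's note: this per-type handling can be pictured as filtering a page's independent JSON-LD blocks by their @type. A hypothetical Python sketch of the concept only, not how Google actually processes markup; the function name and its inputs are invented for illustration:

```python
import json


def usable_markup(jsonld_blocks, flagged_types):
    """Keep structured data blocks whose @type has not been flagged,
    mirroring the per-type treatment described above (illustrative only)."""
    kept = []
    for block in jsonld_blocks:
        data = json.loads(block)
        if data.get("@type") not in flagged_types:
            kept.append(data)
    return kept
```

So if, say, the Organization block were flagged but the Product block were fine, only the Organization block would drop out, which matches the granular behavior described in the answer.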
So if your organization markup is bad but your product markup is good, then you could get a manual action for just that organization markup. And then just the organization markup would not be shown in Search. It's similar if the markup is essentially just broken, where we can't process it properly: if the organization markup is broken and the product markup is OK, then we would just have the product markup, and we can show the product markup. So it's not the case that if one is bad, we would ignore everything completely. It's something where we try to do it on a piece-by-piece basis. I think it would get tricky if you had multiple products across your website, each with the product markup, and some of those product pages have spammy product markup and some of them don't. Then the webspam team might still say, well, for this domain or this website overall, the product markup contains spammy elements, so we won't show any of the product markup. And then that would apply across all of these pages. But if these are different types of markup, then we would try to treat them separately. OK, yeah. Thank you, John. Sure. All right. I imagine all of the questions are lined up for Friday, so it'll be really busy then. But more time for you all to ask questions live, in that case, if there's anything. Oh my gosh. We ran out of questions. Let's see. Yeah, I don't know. OK, one question from me then. Go for it. So there are domains which don't have .com at the end, but .io. And it's originally a geographic designation. But you see that many tech companies, startups, tend to use .io at the end. Does this mean that Google just treats these more like generic top-level domains, or as real country-code top-level domains? If that makes sense. We have a list of country code top-level domains that we treat as generic top-level domains. I think that's in the Help Center somewhere. I don't know offhand if .io is on there. I imagine it is. 
But things like .co, which is for Colombia, lots of people use for company; that's something that's on this list. A really simple way to double-check, if you have the website already, is to use Search Console: verify that domain, and go to the international targeting tool and see if you can set an international target. If you can set the country target, then we treat it as a generic domain. You can target any country that you want. Or you can just say, I don't want to target any country; I prefer to keep it generic. OK, thank you. Sure. The same applies also for all of the new top-level domains. We treat them as generic domains, even if they look like country or location domains. So, for example, if you have .berlin, or, I don't know, .nyc is one, I think, then they're essentially sold as a location-specific top-level domain. But we treat them as a generic top-level domain, because we haven't had any experience that this is limited just to one target audience. So if you have something like .nyc, and you want to make sure that you're targeting users in the US, then make sure that you check out the international targeting setting in Search Console, so that you can really geographically target the users that you're looking for. OK, thanks. Cool. Wow, I did not expect that we would run out of questions. Let's see, one in the chat. If we block Googlebot, but do not mark our pages as noindex, will our pages still be indexed in Google? In our case, we have a staging site that we would like to use some tools to crawl for errors, but they require the page to be marked indexable. So it depends on how you block Googlebot. Our recommendation for staging sites is to try to block Googlebot on a server level, so either with HTTP authentication or with IP address allowlisting, so that you let the users or the tools that you need to have access to your site have access, and block everyone else. 
If you do it like that, then we can't crawl anything from your website. On the other hand, if you use a noindex, then we might still crawl some of those pages from your staging site. Or if you use a robots.txt file, then it can happen that we index pages without crawling them. So we'll just index the staging site URL. And if you do a site: query for the staging site, maybe we'll have some URLs from your staging site that we know about but haven't been able to crawl. So both robots.txt and noindex are two things that I would not recommend for staging sites. Instead, try to use either authentication, so a username and password setup, or work with IP addresses, where you're explicitly allowing the IP addresses of the tools and the users that you'd like to have access to these pages. So in both of those cases, we wouldn't be able to crawl, and that would let us kind of understand that this is not something that you want to have indexed. The other thing with robots.txt and noindex is that it happens extremely frequently, even for really large websites, that they push a version of a new website live that has everything blocked by robots.txt, or everything with a noindex meta tag, because they forget to double-check that. And if you're using authentication or IP addresses, then it's really trivial to recognize that you left the wrong settings in place, and you need to fix that before kind of letting it run loose. I would have another question. Sure. I'm not sure if you will answer, but is the Google update, the core algorithm update, already over, or is it still running? Which update do you mean? The September core, I don't know what you call it. The core algorithm update, I think it is, yeah. I don't know, to be honest. Probably it's completely rolled out. Usually updates like these take, I don't know, a couple of days, maybe a week, to roll out completely. 
But it's something where we tend to announce them when they happen so that people know what's roughly happening, but it's really hard to give an exact start and end time, because there are always so many other things happening in Search. OK, thank you. Another question from the chat. Our web dev recently locked down access to unknown crawlers in Cloudflare, I guess. I've heard that Google will crawl sites using non-standard user agents to check for cloaking. Is that true? If so, is there any risk to our content getting crawled based on our recent changes? Is there anything I should advise our web devs to do to accommodate non-standard instances of Google? So for web search, when it comes to crawling and indexing content, we use the normal Googlebot. So if there's something on your site that you want to make available, you'll see us crawl it using the normal Googlebot user agent. The exact user agent can vary a little bit. So especially on mobile, we try to match it to whatever device and settings kind of make sense for that. We're also working on adjusting the Chrome version in the user agent so that it matches more what we use for rendering. So that exact detail will vary over time, or depending on how we access the page. Also, desktop and mobile have slightly different user agents, but they all have Googlebot specified directly in the user agent. So that's something explicitly to watch out for. And if it doesn't have Googlebot in the user agent, then it's not a crawler from Google Search. So if you're blocking everything that has Googlebot in the user agent, then you would be blocking Google from being able to crawl and index that content for Search. We do have a variety of other user agents that access pages. So things like, for AdSense, we need to be able to access the page to see what this page is about so that we can show relevant ads. I suspect there are other similar crawlers as well. 
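Editor's note: the practical takeaway from this answer is that every user agent used for Google Search contains the token Googlebot. A trivial Python sketch of that check, with hypothetical names; note that on its own a substring check is not proof of a genuine crawler, since anything can claim to be Googlebot, which is why the reverse IP lookup mentioned later in the session matters:

```python
GOOGLE_SEARCH_TOKEN = "Googlebot"


def is_search_crawler_ua(user_agent):
    """True if the user-agent string claims to be a Google Search crawler.

    All user agents used for Google Search carry this token; other Google
    fetchers (AdSense, etc.) use different strings."""
    return GOOGLE_SEARCH_TOKEN in user_agent


# A real-world desktop Googlebot string, for reference; it contains the token:
desktop = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
           "+http://www.google.com/bot.html)")
```

A block rule written the other way around, dropping every request whose user agent contains this token, would therefore cut the site off from Search entirely, as described above.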
I think we have a list of those in the Help Center, or a list of the more common ones there at least. There are also, I think, separate user agents for Search Console and for sitemaps that we would use to confirm verification. If you need to have the site verified in Search Console, then make sure that you're not blocking the bot that checks that the site is verified. Or if you're using DNS for verification, then that's one thing less to worry about. And I believe for sitemaps, we also have a separate user agent that fetches the sitemap file. I'm not sure, actually; it probably also contains Googlebot in the user agent. That's easy to double-check as well. But if you're blocking all Googlebot requests to your normal content and you have a sitemap file, then probably you would be blocking all of the requests for the URLs in the sitemap file that we try to fetch anyway. So that shouldn't make much of a difference there. But again, if you're blocking everything that contains Googlebot in the user agent, then you would be blocking everything that is used for Search. Obviously, that makes our Search team kind of sad if you decide to block all of Google. But ultimately, that's totally up to you. And that's a choice that you can make. Sometimes it makes sense to do that if you have kind of different things that you're testing out and you want to let users take a look, but you want to make sure search engines don't. Then that can definitely make sense. Or sometimes you might just say, I want my content online, but I definitely don't want any of it in Google at all. Then that's also your choice, of course.

Let's see, I have a question. After the core update, our impressions on Google Discover were reduced to zero. Does this mean that Discover has blocked us? Does Google Discover have a whitelisting process similar to Google News? Google News is still giving us traffic. No, as far as I understand, Discover is a completely organic feature.
Obviously, how and when we choose to show content in Discover is quite different from normal search because there's no query at all. And in that sense, sometimes if you write about content that people really care about, then that's something we could pick up and show in Discover. But it's not guaranteed that we would show it there. On the other hand, Google News, like you mentioned, is built on a setup where sites are kind of double-checked and placed into the Google News corpus based on their submissions. And based on that, we would show them in Google News. So that's a completely separate setup from normal search and from Discover.

OK, yeah, more questions from any of you. Feel free to jump in, or let me refresh to see what else is in the comments. I think that's just the one that we just had about locking down access to crawlers. Was that about what you were looking for, Andrew? Yeah, we're not blocking Google, but we're blocking a bunch of other automated scrapers from accessing the site. Yeah. OK, so the other way around. Yes, yeah. OK, yeah. I think if you're not blocking Google, one thing I would do is double-check the user agent and then do that reverse IP lookup, which we have specified in the Help Center, so that you make sure that you're kind of allowing the real Googlebot, not other tools and scrapers that use a fake Googlebot. Because there are lots of them out there that say, well, I'm Googlebot. You should give me all your content.

OK, cool. I mean, we still have 10 minutes left. If people want to ask questions, feel free to jump on in. One thing I did want to mention as well, especially for this Hangout, is that we recently announced a webmaster conference in Mountain View. So if you're in the US, in that region in particular, then check out the Webmaster Central blog and maybe sign up for that. It's a free event. We'll have a number of people from the Search side there as well. We'll have some lightning talks, some Q&A with product managers there.
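Before moving on: the reverse IP lookup mentioned in that answer, reverse-DNS the requesting IP, check the hostname, then forward-resolve it and confirm it maps back to the same IP, could be sketched like this. The hostname-suffix rule (hostnames ending in googlebot.com or google.com) is the one described in Google's Help Center documentation; the full round-trip check needs live DNS, so only the pure hostname check is shown as easily testable.

```python
import socket

def has_google_crawler_hostname(hostname):
    """Googlebot's reverse-DNS hostnames end in googlebot.com or
    google.com, per Google's verification documentation."""
    host = hostname.rstrip(".").lower()
    return host.endswith(".googlebot.com") or host.endswith(".google.com")

def verify_googlebot(ip):
    """Full round trip: reverse-DNS the IP, check the hostname suffix,
    then forward-resolve the hostname and confirm the original IP is
    among the results. Requires network access."""
    hostname = socket.gethostbyaddr(ip)[0]
    if not has_google_crawler_hostname(hostname):
        return False
    return ip in socket.gethostbyname_ex(hostname)[2]
```

The forward-confirmation step is what defeats the fake Googlebots mentioned above: anyone can spoof a user agent, and anyone can point their own reverse DNS at a googlebot.com-looking name, but they can't make Google's forward DNS resolve that name back to their IP.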
I think it'll be pretty interesting. We run these webmaster conferences all around the world. And this is, I think, the first one that we're doing in the US. It's a little bit special for us. But it's not the last one that we're going to be doing, at least according to our plans. So if you can't make it there, then don't panic. I'm sure we'll have more opportunities as well. But I'll be going there. So if you want to drop by and say hi, check out the blog and sign up. Cool. Well, if there are no more questions, I guess we can close a little bit early. That works for me, too. All right.

Oh, here comes one in the chat. For local businesses building directory listings and citations, do you guys count links from places like yp.com as backlinks? I don't think we do anything unique for sites like yp.com. We do take into account links from all kinds of sites. We also ignore links from all kinds of sites when we think that these links are not that relevant for our algorithms. So it's not that we have a separate rule saying links from this site should be treated differently because they belong to this specific category. It's more that for most sites, it makes sense for us to treat their links like normal links. And if they have a nofollow, we treat them as nofollow. If they don't have a nofollow, then we can follow them. And for some sites where we're really not sure if there's value in using these links for Search, we just ignore a lot of those links. But it's not that we would do anything special for directory sites or local business listings.

Cool. All right. So I guess with that, let's take a break here. Thank you all for jumping in and joining us. I hope to see you all in one of the future Hangouts as well. And I wish you all a great rest of the week. Bye, everyone. Bye.