All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts where webmasters, publishers, SEOs can jump in and ask all those questions, at least the ones related to web search. As always, if any of you want to get started with the first question, feel free to jump on in. All right. Yeah, you know, I watched an interesting video from Martin Splitt, the first episode of the new SEO Mythbusting series. It was interesting. And you know, he talked about the three most important factors that webmasters need to pay attention to: content, performance, and meta information. I understand about content. You need to have the best content in the world, and the best performance, to solve all these problems. But what about meta information? Sometimes, you know, I can see that Google doesn't understand it correctly, and sometimes I don't understand how Google sees this content, because it's a short title, 60 characters I think, and 160 characters for the description. How does Google combine user intent with this short information? And perhaps Google doesn't need to show more information in the search results, because people scan, they don't read, so it's enough. But, I mean, do I need to use exact keywords, or do I need to answer questions only in this short information? Can you give me more details? Yeah, I don't know, I think it's always tricky when you have something like top three ranking factors, because things change so quickly, and they vary from query to query and they vary over time. 
So I'm not exactly sure how Martin framed it in his videos, but essentially, some of these things are more relevant for some kinds of developers than others, in the sense that a lot of times, when we talk to web developers, they're not aware of things like meta descriptions and meta titles being that critical. And they don't make sure that they're visible, for example, if they're using a single-page app or a JavaScript framework, those kinds of things, which is why we're explicitly calling some of these factors out in these videos when we're talking specifically to an audience of developers like that. So that's kind of where I think that's coming from. Now, I wouldn't necessarily say that every website only needs to focus on those factors and then everything will be perfect. It's rather that these are things that we've commonly seen being missed by people who are making websites like that. What about, do I need to use the keyword, or does RankBrain understand my content and title if it's a clickable title for users? Sometimes I think Google is confused by the intent of my title. Yeah, and a few times I rewrote a title to use just a simple keyword, if it helps people understand the title better. And after this, Google started to rank it, but it's not as clickable as the first variant. Does it mean it's better to, or does it depend on the situation? I think it depends. I think it's something that makes sense to try out, especially if you're unsure whether people understand what your pages are about, then try different titles. I've also seen people sometimes use ads to try that out, where you can try it out a little bit faster, and where you can explicitly specify a title and a description that you want to have shown and see how users react to that. So that might be another approach. In general, we do understand synonyms. 
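The title-and-description experiments discussed above can be sketched as a simple length check. A word of caution on the numbers: the 60/160-character budgets come from the question, not from any official Google limit; snippets are truncated by pixel width, not character count, so treat these figures as rough rules of thumb only.

```python
# Rough budgets from the question above -- not official Google limits.
TITLE_BUDGET = 60
DESCRIPTION_BUDGET = 160

def check_meta(title, description):
    """Return rough warnings about a page's meta information."""
    warnings = []
    if not title:
        warnings.append("missing <title>")
    elif len(title) > TITLE_BUDGET:
        warnings.append("title may be truncated (%d chars)" % len(title))
    if not description:
        warnings.append("missing meta description")
    elif len(description) > DESCRIPTION_BUDGET:
        warnings.append("description may be truncated (%d chars)" % len(description))
    return warnings

print(check_meta("10 Best Camping Tents", "Hands-on reviews of ten tents."))  # []
```

A check like this is only a starting point for the kind of title testing John suggests; whether a shorter, keyword-focused title actually performs better is something you still have to try out.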
We do understand general topics of things that you're talking about on your pages, so you don't need to go out and put all variations of all keywords on your pages exactly the way people might search for them. But of course you do need to be explicit on your pages and say this is a page about, I don't know, some specific car type, for example, rather than a generic page where you're saying, oh, this was a blast, and you have all of these kinds of wordings on your page and nobody can really tell: are you talking about a car? Are you trying to sell a car? Is this a review about a car? What explicitly are you trying to talk about? So that's something where I do sometimes see sites drift off in the direction of creating really fancy content that looks really nice, but for users and for search engines, it's sometimes really hard to understand what it is that they're really trying to say or what they're trying to offer to people. I think for the most part SEOs get this right, because they think about the keywords a lot more. But especially small businesses, if they don't have someone who's really thinking about what queries people might be searching for to land on these pages, they tend to drift off in the direction of creating pages that look really nice but don't work that well for search, because they don't mention what it is that they're really about. OK, thank you. John, I have two questions. One about mobile-first indexing and one about favicons. OK. Let's start with favicons. It seems to me that if you remove a favicon because it's violating the favicon guidelines, that's similar to a manual action, like when you remove rich results or rich snippets for violating the rich snippet guidelines. I mean, right now it's very new. People are playing games and testing things, and you guys remove stuff. Are there going to be manual actions, or maybe a Search Console notification about favicon removals? 
I don't know. It is possible, but I think it kind of depends on what all happens there and how that plays out. To a large part, these things are meant to be handled in an algorithmic way, in the sense that if we can algorithmically figure out that this is an image that is against our guidelines, then we can just filter it out. And then when you update your favicon file and we see that, oh, it has changed, then that's something that we could algorithmically pick up and say, oh, it's different now. We'll double-check it against our guidelines. And if it's OK, then we'll just let it go. So in general, my preference would be to have something that just works automatically so that you don't have to do any kind of, I don't know, manual action and manual review type process. But I don't know, we'll see how things work out. All right, so currently, right now, it's not automated. It seems like whoever complains on Twitter and gets Google's attention might see something happen. I don't know. I don't know. We do have a lot of experience with automated image recognition. I'm asking right now, is it automated or is it not? I'd assume that there are definitely aspects which are automated. Because, I mean, we have a lot of practice with images in general. So that's something where, from my point of view, there'd be nothing in the way of saying, well, we'll just apply the same knowledge to other types of images. OK, if your favicon automatically defaults to some generic favicon, and then you're like, wait, this might have been bad, let me go ahead and change it, and you do change it. How do you tell Google about that, or is that part automated? I don't know what the current process is there. OK. And my next question is about mobile-first indexing. New sites will not get notifications starting July 1st. Mobile-first indexing, they're not going to get that mobile-first indexing notification. 
But if they aren't mobile-friendly, and you are doing mobile-first indexing because it's the default, are you going to send a notification saying, hey, there are issues here? As far as I know, no. OK. But you don't think, basically, in the blog post it said, generally most sites that launch these days are not experiencing any issues? Yeah. OK. So I guess you may add notifications if you see that change, but for now there are no notifications. OK, thank you. Yeah. I don't know, that's an interesting question, whether we should notify them. I think one of the next steps will probably be for us to figure out what kind of issues are remaining and to start notifying the old sites that have these types of issues. And maybe that's something where we could say, well, we might as well just flag all sites that have these issues, even if they've already switched over. OK. Awesome. All right. So let me take a look at some of the questions that were submitted. Is it possible that visitors from the Discover feed are attributed as direct/none in Google Analytics? I don't know for sure. So I saw this on Twitter and I asked the team about it. And from what I understand, at the moment it's not unified yet, in the sense that the sources across the different Discover feeds that we have available are not all shown in the same way in Google Analytics. In general, I'd assume that in the long run, these kinds of visitors that come from what is essentially another search surface will have a normal search referrer on them. It sounds like at the moment that's not the case everywhere yet, but that's generally the direction that things will be headed. The mobile-friendly test and the URL inspection tool in Search Console return a lot of errors for a lot of websites when it comes to resource loading. Most of it affects image files, JavaScript, fonts, et cetera. Since the number of resources not being loaded correctly changes a lot on each try, this is pretty weird. 
So why does this happen, and does it have any effect on ranking? Is there a way to correct these errors? So this is something which I think we need to make a little bit clearer in the tools, in the sense that these tools are optimized more for speed, more for users who want a fast answer, rather than for completeness. So what generally happens is, with these tools, we want to fetch completely new versions of all of these embedded URLs, so the JavaScript files, images, fonts, et cetera. And that means we sometimes run into timeout issues. And when we time out, then we show a generic error like this. When it comes to indexing, we do things a little bit differently, or at least for rendering within indexing, in the sense that we can cache a lot of these files. We can use older versions of these files and just render with those. For example, font files are not going to change on a day-by-day basis. We might as well cache them for a longer period of time and just reuse them. So for general rendering and indexing, that's not something where you need to watch out for this type of thing. I do think in the testing tools we should make it a little bit clearer so that you know what's happening, or maybe even offer an option to say, I'm ready to wait as long as I get a complete answer. But at the moment, at least, the case is still that sometimes you'll see these errors in the testing tools. You'll be like, what's happening here? You try it again, and then it works. If you see that it works, then generally everything's fine. Speaking about crawl budget for JavaScript websites: is crawl budget related only to the first wave of indexing, the HTML fetch, or also to the rendering wave? So in general, crawl budget is related to every URL that we fetch from the server. 
So if there are multiple embedded URLs on a page that are needed to render that page, and we have to fetch all of them, then those are all included in the bigger group of crawl budget. And that kind of makes sense, because what we're trying to do is prevent overloading your server. And to do that, we try to limit the number of requests that we send to your server. That said, for most websites, crawl budget is not a topic that you really need to worry about. So there's no absolute number where I'd say above this many pages, you need to worry about it. But if you're talking about a website that has less than, I don't know, a million pages, then for the most part, you don't need to worry about crawl budget. That's something where we can still index pretty much everything from your site without running into any limitations. If you do have significantly more, especially if it's an e-commerce site, or you have lots of filters or lots of facets within the sorting and searching pages on your website, then that's something where you might want to watch out for crawl budget and try to find ways to optimize the way that search engines can crawl your pages, so that they don't run into too many unnecessary pages that also need to be crawled and indexed. Because the site has both microdata and JSON-LD structured data, Google Merchant Center has been reporting errors. So we removed the microdata, but the error rate has not dropped. After consulting with Google and checking the server logs, we found that the crawler sometimes doesn't understand the website. I really don't know what you're seeing in Google Merchant Center. So for that, I really recommend checking in with the folks from Google Merchant Center, which are probably the Google Ads folks. But I don't know what it is that you might be seeing there. I think the next question is essentially the same, also with regards to Google Merchant Center. Again, I don't really know what is being tested there. 
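Coming back to the crawl-budget point above, that "every fetched URL counts" includes the embedded resources, not just the HTML pages. Here's a rough sketch of how you might tally that from server logs. The two-field log format here is hypothetical and heavily simplified; real access logs need proper parsing, and genuine Googlebot traffic should be verified rather than trusted from the user-agent string alone.

```python
from collections import Counter

def tally_googlebot_fetches(log_lines):
    """Count Googlebot fetches per file extension. HTML pages and their
    embedded JS/CSS/image/font URLs all draw on the same crawl budget."""
    counts = Counter()
    for line in log_lines:
        path, agent = line.rsplit(" ", 1)
        if "Googlebot" not in agent:
            continue
        # Treat extensionless paths as pages; everything else by extension.
        ext = path.rsplit(".", 1)[-1] if "." in path else "page"
        counts[ext] += 1
    return counts

# Hypothetical "path user-agent" log lines.
logs = [
    "/product/tent-1 Googlebot",
    "/static/app.js Googlebot",
    "/static/logo.png Googlebot",
    "/about Mozilla",
]
print(tally_googlebot_fetches(logs))
```

If the resource fetches dwarf the page fetches, that's the kind of signal that bundling, caching headers, or trimming unnecessary facet URLs might help, per the discussion above.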
Language targeting. Google's guidelines say this: if you have several alternate URLs targeting users with the same language in different locales, it's a good idea to provide a catchall URL for geographically unspecified users of that language. So I guess the question is, how do you provide a catchall, or should we just use an x-default? So essentially with hreflang, you have multiple levels that you can specify there. And at each level, if there is nothing that matches specifically, then it kind of bubbles up to the next level. So if you have English for Canada, English for Australia, and a user from English US comes, then they'll generally bubble up, and we'll say, well, this is not Canada or Australia, but there's a generic English page, so we'll show the generic English page to that user. Similarly, if someone with a different language comes, and it doesn't match Canada or Australia or generic English, then if there's an x-default specified, that'll be used. So that's the setup that works here. If you have one of the higher-level pages, you don't need to specify the lower-level pages. And in general as well, if you don't have anything that matches, it's not that we're not going to show your pages. We're going to give it our best guess and try to pick one that matches fairly well, as well as we can figure out what the user might be looking for. So essentially, when it comes to hreflang, this is an opportunity for you to specify an alternate version for a specific set of users. And if you don't want to do that, then you don't have to. You can specify a more generic version, but you don't need to do that. And like I said, if we can't find a version that matches, then we'll still give it our best guess, similar to if you didn't have hreflang at all; we have to deal with those websites too. You mind if I jump in here? Sure. Hi. So I have a kind of technical question about structured data. 
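The bubbling-up behavior John describes for hreflang can be sketched roughly like this. The example URLs are hypothetical, and the actual matching in Google's systems is more involved than a three-step lookup; this is just the fallback order in miniature.

```python
def pick_alternate(alternates, user_locale):
    """Resolve a user locale like 'en-us' against hreflang annotations:
    exact language-region first, then bare language, then x-default."""
    lang = user_locale.split("-")[0]
    for key in (user_locale, lang, "x-default"):
        if key in alternates:
            return alternates[key]
    return None  # no match at all: the search engine takes its best guess

# Hypothetical hreflang annotations for a site.
alternates = {
    "en-ca": "https://example.com/en-ca/",
    "en-au": "https://example.com/en-au/",
    "en": "https://example.com/en/",       # catchall for other English users
    "x-default": "https://example.com/",   # catchall for everyone else
}

print(pick_alternate(alternates, "en-us"))  # bubbles up to the generic English page
print(pick_alternate(alternates, "de-de"))  # falls through to x-default
```

The `"en"` entry is the catchall the guideline refers to, and `"x-default"` covers users whose language matches nothing at all.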
So I'm starting a product review website, and I'll give you an example. We'd have a type of article that's essentially a list of different reviews about related products. So let's say 10 best camping tents, and we would have 10 different products that we would review in the list, written by a single author. So my first question is, is it against the guidelines to have multiple review schemas on the page? Yeah, I don't think that would work, because we'd want to have the review for the primary content of that page. So if you have 10 unique things on the page that you're reviewing, then we wouldn't be able to map that review markup to one of those specific items. And similarly, we wouldn't be able to, say, use an aggregate rating markup for that page, because these are 10 completely different things. It wouldn't make sense to have an aggregate rating for that. OK. So would it be better to choose one of the products on that page and then have the review schema for just that one product? Let's say we pick the best product, and we put a review schema for just that product. Is that within the guidelines? That would work, yeah. Or if you split it out into 10 separate pages, that might also be an option. It all depends on what you're trying to do and how you'd like to have that look in the search results. So if we paginated it into 10 different items with, I guess, a different canonical for each item, and then had a separate review schema for each product, that would be acceptable? That would be fine, yeah. And then also, is it better to have a product schema with review as a property, or to have a review schema with itemReviewed and then the product, given that I'm not selling the product on the site? I don't know. I'd have to look at the guidelines for that. I'm not sure. OK. And then the last question. 
So if I have an article schema and then the review schema, is there a way to connect them somehow, or should I just have two separate scripts on the page, one for the article schema, one for the review schema? You can put both on the same page. The difficulty in general with structured data, if you have different types on the page that we would show in different ways in the search results, is that we'll pick one of those types and show it like that. So if you have two completely different things on a page, then it might be that we pick the one that you don't feel that strongly about. So generally, I'd try to pick the type that you really care about, primarily focus on that one, and not specify the other type. So for example, if you have a recipe on a page and you also have a product review, because, I don't know, maybe you're cooking something with that product, then we can't show both of those types in the same search result. So we'd have to pick one of those. And our algorithms might pick the one that you don't feel as strongly about. If you say, well, this is really primarily a recipe page, then I would just mark up the recipe and not the product review. OK, does that mean that one could show up for one query and the other could show up for a different query, though? I don't think that would happen. I think we would choose a primary type for the page and we would just show that one. OK, perfect. Thank you. Sure. John, I thought review schema, rating schema, was only for product pages, because I've seen a competitor put it on their category pages, and that's supposedly against the guidelines. Yeah, I mean, category pages would be the same thing again, where you have multiple products that you're marking up. That would essentially be the same situation; we wouldn't know which of these products you're essentially reviewing. It's not the same thing as one product that you're reviewing multiple times. 
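A hedged sketch of what the single-review approach discussed above might look like as JSON-LD: one Product with one nested Review for the page's primary item, rather than ten separate review blocks. All the names and values here are made up, and Google's structured data documentation should be checked for the properties currently required for review snippets.

```python
import json

# One review, matching the primary content of the page -- not one per
# listed product. All field values here are hypothetical.
review_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Camping Tent",
    "review": {
        "@type": "Review",
        "author": {"@type": "Person", "name": "Jane Doe"},
        "reviewRating": {"@type": "Rating", "ratingValue": "4.5", "bestRating": "5"},
    },
}

# Emit as a JSON-LD script tag for the page's <head>.
script_tag = '<script type="application/ld+json">%s</script>' % json.dumps(review_markup)
print(script_tag)
```

Nesting the Review under the Product (rather than emitting two unrelated top-level blocks) is one way to express the connection the questioner asks about, though as John notes, which guideline-approved shape is preferred is worth verifying in the documentation.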
Right, but they do it, and they get the stars pulled through into the search results for a category page. And I thought, well, that's a good idea. And then I thought, no, it's not allowed. Yeah, yeah. I mean, sometimes that happens, that we don't catch it or our algorithms don't pick it up. That can, unfortunately, sometimes happen. But is it against the guidelines or not? I understand that it is. Yeah, it's against the guidelines. Because it should be reflecting the primary content of the page. And if you have multiple primary contents, then you can't pick them all and say, well, they're all primary. Correct, OK. All right, let's see. A question about a manual action for unnatural links that affect some pages on the site. No examples are provided, and I didn't do anything, so how can I fix it? My recommendation there would be to post in the Webmaster Help Forum or check in with other peers who have done similar things, or at least run across similar things. Oftentimes there are some things that can be picked up on that you might be able to work with. If these are unnatural links that are affecting just some pages on the site, then it might also be that the webspam team is just neutralizing something for you that you don't need to worry about. So I wouldn't panic directly. Instead, I would check in with other folks who can help you look through the links to your site and think about whether there is really something that you need to take action on or not. We bought a business that operated a number of domains, each representing a specific subcategory of the main topic. I'd like to consolidate all of these into the main site. Is there a limit to the number of domains that you can 301 redirect to your main site before setting off any alarms? There are 100 domains, and I'd like to 301 redirect them all to the relevant sections on our main site. I don't really see a problem with that. 
I think that's something where, for the most part, if these are really moves that you're making where you're taking a specific product page that was hosted on a separate domain, and you're putting that product page on your main site, then generally that's fine. If you're just taking 100 random domains and redirecting them all to your home page, then I could imagine our systems looking at that and saying, well, this doesn't look like you're moving content. You're basically just trying to take the page rank from these old domains and forward them to your new domain. And that might be something where our algorithms might say, well, we'll just ignore these for the most part. But if you're really moving content from one page to another page, if that's an external domain to your primary domain, that's perfectly fine. If we have a link in the dropdown menu, is it worth putting it one more time in the page content? What about the first link count? It's fine to put links multiple times on your pages. There is nothing really specific that you need to watch out for there. That's something where if you have a reason to put that link multiple times on a page, then go for it. Sometimes people don't find links that are within the menu somewhere. Sometimes it makes sense to highlight new and updated content in things like a sidebar or you have maybe something in the footer that you want to link to directly. All of those are completely reasonable reasons to put a link multiple times on a page. Should internal link anchors be diversified or 100% one anchor? Can a page be overlinked internally? You can do it however you want. So this is essentially a matter of your natural linking within your website. Sometimes if you have a shared menu, then you have all of the links with the same anchor text. There's nothing unnatural about that. 
Obviously, understanding a little bit more of the context of the individual pages helps us to understand how these pages can be shown in search a little bit better. So sometimes having a little bit of variety in the anchor text makes sense. But in general, that's something that you get automatically, in that you'll have a link to a higher-level category. And then from the category page, maybe you'll have a list of specific subcategories. And from there, you have links to the individual products. And all of these links can be slightly different across the website. And generally, that happens naturally. Can a page be overlinked internally? I haven't really seen anything like that happen. The only thing that I do sometimes see happening is that essentially a page or a site ends up going in the direction of keyword stuffing, in that maybe they'll have a footer where they link to all pages on the website with a bunch of different keyword-rich anchors. And then it's less a matter of these pages being overlinked internally, but more that all of these pages essentially have this big collection of keywords on the bottom where, to us, it looks a lot like keyword stuffing on the page itself. So that's the one thing I'd generally avoid. But otherwise, if you're building a website and you're linking things naturally within your website, then I don't really see a lot of reason for worrying about that. John, can I ask a question, or do you want to keep going? Go for it. So the indexing issues, just two questions. One is about the ones from last week. They were both about Google having a problem indexing new or fresh content, correct? Oh, man. I don't know which ones were last week. It was just last week. Obviously, the one from April was like a month ago. That was specifically a de-indexing issue, removing indexed pages from the index by accident. So de-indexing content. The ones from last week seem to be about not being able to index new content. It's possible. I don't know. 
We have so many things happening that I'm not 100% sure. But I think that might be correct. OK. And I guess related to that: are you guys going to write up some post-mortem about the issue? Remember, we discussed you might be doing that, so you can share something that's going on with webmasters, SEOs, and developers. Is that still something you're considering, or probably not? We're still considering it. I mean, it's something that sometimes is a bit tricky to put together, because so many different teams are involved. But it is something that, in general, we'd like to do. I think a lot of these issues are things that can happen to other websites as well. So there are always some learnings that can be shared, which I think would be nice to do. I mean, it's not just about being nice. It's also about knowing: is it removing content from your index, like older pages that were already indexed, which might have data-loss issues, versus brand-new content that was never indexed, which took maybe a couple of hours longer to be indexed, or maybe 24 hours longer to be indexed? And then there would be no data loss in Search Console, because that stuff was never in there. These different ways of understanding how Google bugs, or potential bugs, may have an impact on my website might be useful. Not just as a learning thing for SEOs, but more as an analysis thing that people could look back at and say, these dates may have less traffic because of x, y, and z. And maybe that's important for them to understand. I don't know how much detail we would go into there, because a lot of these things, when things go wrong, are usually based on our internal systems. So it's not something where it's that trivial to say, this was exactly that, and therefore you don't have to worry about it anymore. So that's sometimes kind of tricky. OK. The only reason I bring it up is because I keep getting people saying, I don't think that issue from last Thursday night was actually resolved on Sunday. 
I still think there's issues. It was resolved before Sunday. It was just at some point, we're like, oh, maybe we should actually tell people it's OK now. I understand that, but people are saying, no, it wasn't resolved because my site's still not fully indexed. Yeah, I mean there are lots of reasons why sites are not indexed or why sites might not see their content being indexed as quickly as they'd like to see it. And a lot of times, those are just completely normal things, which I don't know, all of us have seen over and over the years, and they're not tied to any specific issue. So as far as I know, everything is running smoothly and has caught up. So if people are still seeing issues, then that would not be related to any of the old issues. OK. Let's see. Let's say we wrap a vehicle that advertises a relatively new website. Users that see the advertised website are likely to do what most users do, type the URL into Google Search. I don't know. People usually type the URL into a browser, but some people type it into Google Search, I guess. Will their searches assist in getting Google's attention? In other words, will the site be made better known to Google by users searching for it? I don't think that would really make sense, because if they're especially looking for your URL, then we're already going to show your URL for that search. It's not that we can learn anything better for that URL that people are searching for. So in general, I think it's a great idea to advertise a website in other ways, other than just online, so it attracts people's attention. Sometimes people remember these URLs or the names, and they search for them, and they go there directly. I think all of that is great. I just wouldn't assume that there's any kind of SEO benefit of people searching for your URL and then clicking on your website. 
On the subject of creating multi-regional, multilingual sites: if we create pages for specific regions in their respective languages, should the URLs be written in that language? You can do it either way. For us, URLs are primarily identifiers, in that we need to see a unique URL per language so that we can index those URLs in the individual languages. And if you want to use a URL that is written with words in that language, that's totally fine. I think that's a good idea, but it's not a requirement. There is no special bonus for having URLs translated as well, so that's essentially up to you if you want to do that. We do have that, I think, as a recommendation in one of our guidelines, in the sense that it's generally good from a user experience point of view, but from an SEO point of view, it's not that you get any kind of special bonus for doing that. Search Console is returning 404 errors for AMP URLs, but the pages are valid 200 and valid AMP. Does Search Console still face issues with AMP reporting? I'm not aware of anything specific there. So AMP pages are normal HTML pages, and if we see a 404 for a request of a normal HTML page, then I think that makes sense to show in Search Console. And that's also not something that would be related to the AMP content or not, because we wouldn't know that it's an AMP page before we actually get the content. So it's not that we could do anything special for AMP pages and say these are AMP pages with a 404, because we don't know that they're AMP pages if we're seeing a 404. So my suggestion is, if you're seeing 404 errors for any kind of URLs that you care about on your website, then I'd double-check to make sure where that's coming from. In general, our systems wouldn't make up a 404 status for a page and say this is a page that was not found if there were any kind of internal problem trying to reach that page. 
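For the last point, double-checking 404 reports for URLs you care about, here's a small triage sketch. It assumes you've already collected (URL, status) pairs, say from a Search Console export or your own crawl; the paths and the `/amp/` prefix are purely hypothetical.

```python
def flag_errors(fetched, important_prefixes=("/amp/",)):
    """Return the (url, status) pairs worth investigating: non-200
    responses on paths you actually care about."""
    return [
        (url, status)
        for url, status in fetched
        if status != 200 and url.startswith(important_prefixes)
    ]

# Hypothetical fetch results for a site.
fetched = [
    ("/amp/article-1", 200),
    ("/amp/article-2", 404),
    ("/old/redirected", 301),  # outside the prefixes we care about
]
print(flag_errors(fetched))  # [('/amp/article-2', 404)]
```

The point, per John's answer, is that a reported 404 usually reflects a real server response at crawl time, so the useful next step is confirming what your server actually returned for those specific URLs.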
You mentioned before that if you set up different subdomains targeting non-English languages, these can rank differently in the countries that they're targeting and could have much less visibility than the English subdomain targeting the US. Is it safe to assume that those additional subdomains are being evaluated differently, from a quality and relevance standpoint, from the core subdomain which targets English? Also, if you just translate the template and not the primary content, would Google see those translated pages as much less relevant than others with primary content written in the foreign language? So in general, the question about having localized content on a different subdomain, or on the same domain, or with parameters, or whatever, is essentially mostly a matter of the competition in those languages, in the sense that sometimes you can have a page in one language that's ranking really well because it's a really well-known page in that language. And just because you have that page translated into another language doesn't necessarily mean that the translated page will also rank really well for users of that other language. So it's not a matter of how many signals are being passed with subdomains or main domains or subdirectories. It's really just a matter of, well, this is a page that's targeting Spanish users and this is a page that's targeting German users, and they can rank completely differently even though they're talking about the same thing. So that's something that from our point of view is kind of normal. It's not that there's anything special with regards to quality there. Essentially, these are separate pages. The search results are different in those individual languages, especially in the individual countries. If it's a geo-targeted page, if it's a page for a theme that people tend to search for on a local basis, then those environments are completely different. So the rankings there can be quite different. 
Then there's a question about job listings, with regards to spammy markup and some changes that were made there. I don't know specifically how the job listing pages are shown in search with regards to how those are ranked individually, so I passed that question on to the team to see if I can get more information from them. But I haven't heard back. Yesterday was also a holiday, so that sometimes just takes a bit of time to figure out. I'll see if I can find an answer for you there.

I'm analyzing a site that is mistakenly publishing two canonical tags on certain pages. When inspecting the URL in Search Console, it says there isn't a user-specified canonical tag, so 0 instead of 2. Is that because Google is simply ignoring both canonical tags, since there's a conflict?

I don't know. It's hard to say without looking at specific examples. I suspect this would be easy to double-check by just making a test page, putting two canonical tags on there, and seeing what happens. Sometimes we do follow one of those, even though we don't show that explicitly. As always with canonicalization, it's a matter of multiple factors aligning. So if, for example, we're confused by the canonical tag, but we see that all of the internal links are pointing to one version, or other factors are aligning with one version, then we'll just pick that one version. And that might be one of the URLs that you have specified as a rel canonical. So that's sometimes kind of tricky to trace back, because there are just a number of factors involved with canonicalization.

When combining multiple sites into one, do you recommend using the change of address tool?

Generally speaking, I'd avoid doing that, because that tool is really meant for forwarding signals from one domain to another domain, where you're really moving everything from one domain to a new domain.
I imagine you're not going to break anything by doing that in Search Console, but I also suspect you're probably not going to see any advantage from using the change of address tool if you're not really doing a change of address, but rather merging multiple sites.

In India, People Also Ask answers appear from US websites, which are not relevant to Indian users. Isn't this supposed to be country-specific?

I don't know. Some of these features are country-specific, and we try to do the appropriate localization there. Sometimes we just don't have enough information on a country-specific level, so we don't always show that in a country-specific way. That might be the case with People Also Ask questions. I don't know; I haven't really looked into that specifically.

Then there was a question about commas in large numbers in the description. I'm happy to take a look at those. My suspicion is that the snippets team that works on this just does this by design, to kind of unify all of the numbers that they find there.

How can I track sessions from my UTM tags for my local SEO business? It's not available in Analytics, though all those metrics, such as impressions, are available.

I don't know how you would track that in Google Analytics. So that would probably be something you'd want to check with the Analytics forum, or with one of the Google My Business forums out there, where you can see what other people are doing.

If we have an established site and want to add a blog to it, would it be a bad idea to put the blog on a subdomain? Would a subdomain be considered a new website?

I don't think it would be a bad idea to put it on a subdomain. I think it's really a matter of what you're trying to achieve there. If you're trying to just expand an existing website, then maybe put it within your main website. If you're trying to add something kind of in parallel to your website, then having it on a subdomain is perfectly fine.
From our point of view, when talking with the indexing teams, they consider subdomains to be just fine as a way of adding content to a website. Talking with external SEOs, they're like, oh, subdomains are evil, you should always use subdirectories. So from my point of view, I think you'd be fine either way. I'd look more into what makes more sense for you: from a management and maintenance point of view, from a tracking point of view, from an analytics point of view. Sometimes it makes sense to have everything together in one unified location. Sometimes it makes it a little bit easier to separate things out. Totally up to you.

Does that affect my rankings? A few weeks ago, I social-bookmarked my site, and now it's marked as spam, even though I haven't done that.

I don't really understand the question. It sounds like you added your site to a social bookmarking service, and then the social bookmarking service marked your site as spam and removed it. I suspect we probably ignore that from an SEO point of view. But at the same time, if you're trying to build links to your website by using social bookmarking services, I suspect that's not going to be that fruitful. It probably makes more sense to focus on your website first and make sure that it's a really fantastic website, and then try to make it so that people really appreciate your content and link to your site on their own, rather than you going to the social bookmarking sites. Essentially, I imagine most search engines have seen these social bookmarking sites and pretty much ignore all of them.

We implemented organization structured data on our site. We implemented local SEO. Is it advisable to also implement local business structured data, keeping in mind that both of the above structured data have almost the same content, such as address, business name, et cetera?

I don't know what you mean when you say you implemented local SEO.
So that's kind of hard to say. With regards to local business and organization structured data, that's something you can combine, so I don't see much of a problem there. I suspect for the most part we use these to just better understand where your business is located. So if you have a Google My Business listing already, then that kind of maps to your location as well. With regards to things like phone numbers, which we could show in search, we can pick those up from structured data as well, and of course also the opening hours, which we can likewise pick up from structured data.

The one thing I'd keep in mind with all of these different ways of specifying this information is that it should all be in sync. So for example, if you update your opening hours on Google My Business and you don't update them on your website, then suddenly we're kind of in a conflict there, in that we see you updated it here and your website says something different. Which one do we choose? We might choose one, we might choose the other, we might switch between them. So that's something that makes it kind of tricky. What I'd recommend in a case like that, where you're not sure how you will update this data, is to focus on just one location for specifying that data, so that you have one place where you need to update it. You can keep that one location in mind, update it, and then all of the signals that we have are updated automatically. Whereas if you have multiple locations where you're specifying the same thing (multiple types of structured data, multiple home pages where the structured data is specified, or the home page, the Google My Business listing, and maybe other external listings as well), then you have to keep in mind that all of these should be in sync as much as possible. So that affects the opening hours and things like phone numbers and addresses.
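To make that concrete, details like address, phone number, and opening hours are typically expressed on a page as JSON-LD structured data, embedded in a `<script type="application/ld+json">` tag. A minimal sketch, with all business details being hypothetical placeholders:

```javascript
// Hypothetical LocalBusiness structured data. As discussed above, keep
// these values in sync with the Google My Business listing and any
// other places where the same details are published.
const localBusiness = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: "Example Bakery",
  telephone: "+1-555-0100",
  address: {
    "@type": "PostalAddress",
    streetAddress: "123 Main St",
    addressLocality: "Springfield",
    postalCode: "12345",
    addressCountry: "US",
  },
  // Compact schema.org notation: Monday through Friday, 08:00 to 18:00.
  openingHours: "Mo-Fr 08:00-18:00",
};

// This JSON payload is what would go inside the ld+json script tag:
const payload = JSON.stringify(localBusiness, null, 2);
console.log(payload);
```

The point of generating the payload from one object like this is exactly the "one location to update" idea: change the opening hours or phone number in one place, and every page that embeds the output stays consistent.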
They don't change that often, but if you do have changes there, make sure that all these locations are updated, so that we can keep your listing in one clean state rather than being uncertain which data source is the correct one today.

I think that's pretty much it for all of the questions that were submitted. Let me see, in the chat there's a little bit happening, but I don't think there are any specific questions. Oh, here: does Googlebot still convert the hash bang into the ?_escaped_fragment_= version?

I don't think we do that at all anymore. We really render the hash bang URLs directly, and we try to index those directly. So I don't think we use the kind of pre-rendered version that you can specify with _escaped_fragment_. It might be that others still use that. So maybe if you share something on Twitter or on Facebook, if they still use the hash bang setup, then that might still make sense. But in general, we deprecated that on our side quite a long time ago. So if you're still using that setup, I'd really recommend moving to something that's more URL-based rather than hash-based.

Hey, John, it's actually my question, and that's the reason I brought it up. We're actually trying to move to a more URL-friendly structure. So my question is, what would be the best way to set up those redirects, since you're saying you're not using the escaped fragments? And with this setup, you can't really do server-side redirects. How would you recommend doing it?

You'd have to do that with JavaScript.

OK, so window.location.replace. Is that the recommended methodology? OK. OK, I'll try that out. Thanks.

Cool.

John? Hi. I have a question that I put in the comments, but you didn't get to it. It's regarding streaming content, basically long-running responses, if you will.
So with the async frameworks that are available now in Python, and I guess Node.js and so on, you can stream the content progressively to the page, so that the above-the-fold content, for example, can be served almost immediately, and then content that requires I/O, like database lookups, can come at later points. My question is, how does Googlebot handle that? And what are the impacts on page speed metrics?

I suspect we handle it badly. I'm not sure; I'd need to test that. But probably what happens is we try to render the page, and at some point we take a snapshot. So if you're still streaming the content at that point, then it might be that we miss the rest of the content that you're still streaming. So I suspect that would be kind of tricky. What I'd generally recommend doing there is making sure that the initial content is available as quickly as possible. And then if you're streaming additional content, that might be something where you say, well, this is not critical content that needs to be indexed, therefore I'll just continue doing this because it works well for my users, and I don't need to have it indexed for search. That might be a trade-off you're willing to make. But I'd generally test with things like the mobile-friendly test and the Inspect URL tool to see what comes up there. It might also start working better in those tools in the future, because for indexing we switched to the modern Googlebot version, but for these tools we're still using, I think, Chrome 41. So if you're doing something that doesn't work in Chrome 41, then it might be that you don't see it in these tools yet.

I was very happy to see that there's a switch to the modern browsers. That makes, I think, a lot of people's lives a whole lot easier. I have another real quick question, unrelated. I have a site that uses a top-level domain that's niche-specific: .run. And the content on the site is all US-based content.
Well, US- and Canada-based content. But it seems that most of my traffic right now is coming from everywhere. Canada's not too bad, but I get very little US traffic. I get a lot of UK, Australian, and Swedish users, a lot of European users. And they land on pages where the content isn't really relevant to them, so to speak, if you understand what I mean. In other words, the users will come, they see that there's content there, but it's not relevant to them, and so they drop off. Is there a way to influence that somehow?

Not directly. You wouldn't be able to say, don't show my pages in these countries. If you had multiple versions of that content, you could say this is the version for UK users and this is the version for US users. But it sounds like you just have that US version.

Yeah.

So there's nothing really specific that you can do there. From a search point of view, I don't see that as being problematic, essentially. If people come to your site and say, well, this isn't what I was looking for, that's perfectly fine.

The problem is that the people who should be looking for it aren't finding it, and the people who are looking for something similar, but not quite, are finding it, and then, you know what I'm saying.

Yeah. I don't know. It sounds more like there's maybe something you can tweak with the content on those pages to make it clear which locations you're specifically targeting. It's not that there's something technical, like a meta tag, that you can tweak to say, this is more for the US and less for the UK.

OK. All right, thank you.

Sure. All right, it looks like we're kind of out of time, and I see there's one more question in the chat. Oleg, if you could copy that into the Hangout listing, then I can take a look at that afterwards as well.

Sure, will do.

All right, great. Thanks, everyone, for joining. Thanks for all of the questions and comments along the way. I hope this was useful.
And as always, the next Hangout, I think, is on Friday. So if you want to drop in there, that's perfect. I'll set more Hangouts up towards the end of the week as well, so if that time doesn't work, then we'll find a different time for you. All right, thanks a lot, everyone. Bye.

Bye, John.