Let me click this. All right, welcome, everyone, to today's Webmaster Office Hours Hangouts. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what we do are these Office Hours Hangouts. And some of you are joining from the middle of the night, more or less, so — what can I do to help you?

Sure. So I'm a product manager for a large website provider and digital advertising platform in the automotive space, and we're seeing some strange things with cross-domain canonicals and cross-domain caching on our websites. Pardon me. An example: I do a site: query for one of our domains, and some of the results that show up contain data from a different website on our platform. When we check these pages in Search Console, it says that Google chose a different canonical than the user-declared one. When I check the Google cache of those pages, it shows the other website that the data is being pulled from. In many cases, the contents are completely different. For example, I have a page where the staff members are all filled out — staff members with pictures and phone numbers by department. Google is saying, nope, this is not the canonical; this page over here is — and it's for a completely different dealership in a different country. This is across Canada and the United States. And the page that Google is selecting as the canonical is blank: there are no staff members on it, and the other dealership isn't even using that page. So we've been trying to chase this down and figure out why it's happening and what we can do to resolve it, and I'm just coming up blank. And here I am at 3 AM trying to get some help from you. So that's my situation that I'm hoping you can dive into. I did put a post on the product forums detailing this.

So I'd need to take a look to see the specific URLs. If you can send me the link to the forum post, I think that would be really helpful.

OK, let me drop it right in here.

Sure. In general, what could be happening here is that we try to recognize duplicate content at different stages of our pipeline. On the one hand, we do that when we look at the content — after indexing, we've seen that these two pages are the same or almost the same, and we can fold them together. But we also do that essentially before crawling, where we look at the URLs that we see and, based on the information that we have from the past, we think, well, probably these URLs will end up being the same, and then we'll fold them together. That usually makes sense when we can recognize clear patterns, especially within a website, where we can see that everything in this subdirectory is the same as in this subdomain because of the way the hosting is set up — so we can kind of blindly assume that these URLs are the same without actually looking at them. And that comes up when you're looking at a website provider or platform situation, where you have one hosting setup and different websites on that same setup that are often configured in similar ways. But I don't know about your site.

There's certainly similar architecture across our sites. With automotive, there's a franchise model, so some of the manufacturers require that the dealer has a website with a certain provider and that those websites follow the exact same templates and things like that.
But it's not even happening with the same template — it's happening across different templates, and it just doesn't seem to make sense, particularly from a Google user's perspective. If I'm searching for a service page for a Chrysler dealership in Toronto, but that service page doesn't rank because Google chose as canonical a page from a Toyota dealership in Florida, that means the user has to land on the home page and then click through to the service page instead of going there directly.

So I think what might be happening here is that we're seeing this pattern on some URLs and, based on that, we're extrapolating that it applies to the other URLs as well. And that's probably something we could fix on our side. Usually, when I take examples like these to the team, they'll go, OK, our systems picked that up incorrectly, we can fix that. Sometimes it's due to things like shared IDs across different websites. So, I don't know, you go to one newspaper website and look at article 1234, with that ID in the URL, and you can take that same ID, use it on a different website, and it brings up the same article. That's something where our systems will look at it and say, well, this article ID is essentially shared across all of these websites, so we can just pick one and index that. And I suspect —

Well, that makes sense. But we have clients, too, that have two websites with us. They do that because they want to get around the restrictions of having an OEM style — they want to be able to design everything themselves. And I can understand making a canonical decision between those two sites, because it's the same business. But when we're talking about totally unrelated businesses in different parts of the country, it just drives me a little bit crazy.

Yeah, I think this is something we can probably help with. I'll take a look at the forum thread and pass that on to the team here so that they can see what exactly is happening. And you have the URLs in the forum thread, I guess?

Yeah, there are samples in there.

OK, cool.

Awesome. Well, thank you for taking my question. I really appreciate it, and I look forward to a follow-up.

Sure, I'll see if I can post something in the forum and definitely pass that on to the team.

OK, great. Thank you very much.

Thanks for joining in the middle of the night. All right — any other questions?

It's hard to say. In general, when we see a site move from HTTP to HTTPS and it's a clean move, then we can pass everything on to the new version, and that should be pretty straightforward — it should be within a couple of days, and essentially the same status afterwards as before. However, we do sometimes see sites make mistakes with HTTPS moves, or with site moves in general: things like changing the way the robots.txt file is set up, changing things around noindexing, maybe missing redirects or not setting up redirects properly. All of these things give us a signal saying that, actually, this is not a one-to-one site move — this is something kind of unique, and we need to look at it in more detail. Our systems then generally tend to be a bit more cautious and process these things a little slower, because we have to kind of relearn your website in a case like that. And that's something we've seen is pretty common.
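For illustration, the "clean move" being described usually comes down to a blanket one-to-one permanent redirect from every HTTP URL to its exact HTTPS counterpart. A minimal sketch, assuming an Apache server with mod_rewrite (the hosting setup is an assumption here, not something from the discussion):

```apache
# Permanently (301) redirect every HTTP request to the same
# host and path on HTTPS, so each URL maps one-to-one.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```

The point from the answer above is that a clean move changes only this: robots.txt, noindex, and everything else stay untouched, so the redirect is the one thing Google sees.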
We're also looking into setting up messages through Search Console where we send people explicit messages saying, hey, we see you're doing a site move to HTTPS, but you're missing something — you need to make sure that you're really doing everything correctly. It's something I also see when I talk with SEOs externally: some of them will say, well, my HTTPS moves are completely pain-free and smooth, and others say, well, I work with sites that can't set up redirects properly or completely, or that change some things along the way, and then the site moves take a lot longer. So that's, I think, one thing.

The other thing, with regards to the indexed URLs: it's important in Search Console that you look at the correct version. If you have HTTP and HTTPS verified, you need to look at the version where the URLs are currently indexed. Because if you look at the old version, you'll see the index count going down and panic — when actually, if you look at the new version, you'll see it's going up there. It's just shifting from one to the other. So that's one thing to keep in mind. We're also looking at ways in Search Console to reduce this kind of confusion, because it feels like an almost theoretical limitation, where we say, well, HTTP and HTTPS are theoretically different sites, therefore we will force this theoretical difference upon all the webmasters who think, well, this is all the same site. So we're looking at ways we can fold those together and say all of these are essentially the same site: if you have one verified, you have them all verified, you don't need to track them separately, we'll combine that for you — because that's what you're trying to figure out anyway. One thing I want to add: if we see that the move is a really clean site move, then it's a lot easier for us. But if there are any kind of small technical mistakes along the way, that makes it much trickier.

Thank you. Thank you, John.

Sure. There's a question in the chat: I've heard Google has added a concept of beneficial purpose to the quality rater guidelines. Can you clarify the concept of beneficial purpose?

I don't actually know — I haven't actually looked at those updated quality rater guidelines. I think it's also important to keep in mind that these quality rater guidelines are used by the teams that rate the search results when it comes to reviewing different algorithms. And it's important for those quality raters to have the bigger picture in mind and to try to think past just this query and this page — to look at the bigger picture and see, are we really providing relevant results to users for queries like this. It's not so much that they would go through and say, well, this website has this thing that's bad, therefore it will rank badly. It's more that they'll look at the algorithms that we give them and say, well, this algorithm is presenting more relevant results than maybe the other one. But I haven't actually looked at the details of what's going on in the new version.

All right, let me run through some of the submitted questions so we don't forget those. We moved from HTTP to HTTPS almost a year ago, and our rankings are worse. I think this is very similar to what we discussed before — essentially that same kind of situation. Usually, when we see changes like this, it's either due to just general issues with the site move, or —
well, like I mentioned before. In this case, though, the question also says that the move happened a long time ago and the rankings are now much worse. If this is more of a long-term thing, then usually the technical change you made with moving to HTTPS would not be related to it. Long-term rankings are probably more related to the website itself, or to changes in our search algorithms in general. So if it's been a long time since you moved to HTTPS and you're seeing bad results in search, then probably that's just our normal search rankings at work, and not something specific to the move you did back then.

The question goes on with regards to English queries on non-English websites. In particular, if you search for Game of Thrones and you have a Czech website that has information about Game of Thrones, how do you make sure that it can rank? How would Google handle that kind of language switch? In general, we do recommend having one primary language on a page, but we can deal with multiple languages on a page, and sometimes that's natural as well — like in this situation, where you have Game of Thrones as a title and the content might be in another language. Another really common scenario is something like a vacation website, where you have vacation homes in Spain and your website is in English, but it's about these cities in Spain. Then, obviously, the cities will have Spanish names, Spanish street names, and maybe Spanish restaurant names and things like that on the page. When we look at that page with our algorithms, we'll see there's a lot of English content here and also a little bit of Spanish content — but it's not that we'd have to say, well, this is definitely a Spanish page, we shouldn't show it for any English queries, or it's definitely an English page, we should never show it for any Spanish queries. We do try to take into account the different languages that we find on a page. So in practice, it's not impossible to rank for this. I could imagine that our algorithms try to find the right balance there over time, figuring out how much of this should rank for English queries and how much for Spanish queries — but that's generally something we try to figure out as part of a page's relevance anyway. So in the case of the question here, with a Czech website about an English TV show, I don't see anything by design that would prevent this website from ranking for those kinds of queries. I'd continue working on it; I think bringing those different aspects together is a good thing to do.

And, Aaron, I think you shared something regarding a site-move type situation, where one subdomain is planned to move to a subdirectory and currently has a noindex. This goes into a similar territory with site moves, except that it's probably even a little trickier. If you have a noindex set up at the moment and you want to redirect those URLs later, that's another level of complexity added to the move. Because first we process the noindex and think, oh, there's nothing here to index — and usually what happens then is we reduce crawling of that content a little, because we figure, if we can't index it, we don't have to crawl it that often.
So if you start setting up redirects at that point, it's going to take a little bit longer for us to actually process all of those redirects and move things over. The other complexity here is that you're moving from a subdomain to a subdirectory.

I think it's the main domain — a subdirectory on the main domain.

So it's a subdomain moving to a subdirectory on the main domain?

Correct. Yeah, one site just got folded into a larger website, added as a section on the larger website, essentially.

Yeah. So I think that's perfectly fine to do; I can see why that might make sense. What would happen there, though, is that we wouldn't see it as a site move. We would essentially process those URL moves individually, and that means, again, it takes a little bit longer, because we have to process all of those URLs individually. Things you could do to help us with that: maybe set up a sitemap file for the old URLs and for the new ones, so that we can recrawl the old ones a little bit faster, perhaps with a new change date in the sitemap file. And then, essentially, you wait until everything settles down. If the old part is indexable again, it's usually less of a critical situation, because we can index either the old version or the new version, and during the migration from one part to the other, the content will still be visible. If you have to noindex it in the meantime, then I guess it's —

I'm going to quickly interrupt real quick. Those are two separate issues with one site, and I understand that one now. The other site — that was a rebrand, and that was two years ago. I'm sure it's been re-indexed completely, over a year ago. And whatever's happened with that one, it's just floundered. So it's not really a matter of waiting, I think, at this point, because two years is quite a bit of time.

Yeah. OK. But were you able to look at that one quickly?

How do you mean?

See if there were any problems with it.

Any problems with it? I didn't look into those URLs myself, actually. So this is something that has already migrated over — is that correct?

Correct, two years ago.

OK, I missed that. And you're just seeing that the new URLs are not being indexed as well as they should be, or not ranking as well as they should be?

The site traffic has just basically been dropping off for a couple of years, despite a lot of work put into good content and whatnot. It dropped to maybe a quarter of what it was before the move, and it's never really changed since then.

OK. So at least we know what to look at next time around. Usually, if this is something that happened a couple of years ago, then it shouldn't be associated with the site move anymore — that's essentially the new normal situation for the site. But I can take a look at some of the details there.

OK, that'd be good.

Cool. In that case, I misunderstood your question earlier. All right, I'll do that.

Let's see — recommendations about lazy-loading images while at the same time making sure the images can be crawled and indexed. Do image sitemaps resolve all potential issues? So the current recommendation we have for lazy-loaded images is to put an image tag in a noscript element below the lazy-loaded element, the way that you have it. That way we can definitely understand which images belong where, and we can process and index them normally. That's essentially the current recommendation.
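As a concrete sketch of that recommendation — the class name and data attribute below are placeholders for whatever your particular lazy-loading script uses; the noscript fallback is the part that matters:

```html
<!-- Lazy-loaded image: the real URL sits in a data attribute that
     a script copies into src once the image scrolls into view. -->
<img class="lazy" data-src="https://example.com/photos/staff.jpg"
     alt="Service department staff">

<!-- Plain img fallback in a noscript element directly below, so
     crawlers (and users without JavaScript) still get the image
     and its alt text. -->
<noscript>
  <img src="https://example.com/photos/staff.jpg"
       alt="Service department staff">
</noscript>
```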
I think we talked about this at Google I/O earlier this year, but we need to write up the details as well, so that it's a little easier to follow along with what we actually mean there. An image sitemap can help us a little bit, but it doesn't solve the problem of giving us context around the image. With an image sitemap, you can tell us that these images belong within this page, and we can associate those images with that landing page — but it doesn't give us more information about the images themselves. In particular, with images, you can give us a description with the alt attribute; you can have titles above the image or captions below it, and text around the image in general on the page. All of that helps us to better understand how an image fits in, and those are all things we would lose if we only had the image in an image sitemap file. So if there's something within those images that you want to have found in search, in Google Images, I'd make sure you give us as much of that additional context around the image as possible.
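For reference, in its minimal form an image sitemap entry just ties an image URL to its landing page, along these lines (URLs invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/our-team.html</loc>
    <!-- Associates this image with the landing page above;
         the on-page alt text and captions still carry the context. -->
    <image:image>
      <image:loc>https://example.com/photos/staff.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```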
Please help us to improve our search presence for our website. So for general questions like this, I'd recommend going to the Webmaster Help Forum and getting some tips from the general webmaster community, to see what they come up with when they look at your website. Because looking at a website and generally thinking about what you could do to improve it is such a big area — there are so many different directions you can go in. And it's not something where you can just follow a checklist and say, well, I'll do these five things and then I'm done. The nice part about search is that we use so many different factors for crawling, indexing, and ranking. You don't have to get them all right, and you don't have to do exactly the same things as your competitors — you can be a bit creative and try something else. Those are all things that people might be able to give you some tips on, which you can then look into and try out.

Is there a risk for rankings if we place, in the above-the-fold area of all the articles on our website, a standard message for users? The message is a call to action for subscriptions. So this sounds a lot like an interstitial or a pop-up kind of thing, and I'd be cautious about that. In general, when it comes to search, we do expect to find some of the critical content of a page in the above-the-fold area, so that when a user goes to the page from the search results, they feel like they've reached a place that's relevant for them — that the search results we gave them actually help them solve whatever problem they were looking to solve. Whereas if they go to the page and all they see is a big interstitial essentially saying, hey, you should subscribe to our newsletter, then, unless they were explicitly looking for newsletters to sign up for, they'll probably feel that Google sent them to the wrong place.

Any comment on these newer web designs? Yeah, I think it's something that is definitely worth looking into. From a purely SEO point of view, I imagine the effect is fairly small, in that we can kind of ignore things that are in the boilerplate. But from a usability point of view, from a conversion point of view, these are all things where you're getting in the way of users actually using your website for what you primarily want them to do. And probably you're not selling subscriptions to the newsletter, for example — rather, you're trying to offer a service, or you're trying to build a community of people who come to your site, look at your content, maybe click on your ads or buy something from you. With all of these additional hurdles you place in the way, you make it harder for people to actually follow along and do what you'd like them to do.

I can't submit a URL to the Google Submit URL tool. So I think we mentioned this on Twitter recently: we removed, I believe, the public, non-signed-in versions of the Submit URL tool. We'd been seeing some problems with those tools recently, and we decided the best approach is just to be direct and say, well, this tool within Search Console is the way you can do that — that way we can take your trusted submissions and process them in a cleaner way. So I'd recommend really going through Search Console and doing it with the Fetch as Google tool and the Submit to Index feature there. If you haven't verified your website in Search Console, that's a good reason to do it. And if you can't verify your website in Search Console because of, maybe, your hosting setup, then I'd definitely talk with your host to see what they can do to make that possible. We have a number of ways to do verification in Search Console — if you can't, for example, add a meta tag to your pages, maybe there's something you can do with the DNS settings, or with the verification file that you can copy up there. It might be worth looking into the different options.

The media attribute for mobile alternate tags should contain max-width: 640px. What happens if this is a different value? So from our point of view, we use this to try to recognize the mobile alternate URLs associated with a website — we try to use it to figure out what content would be shown to a user on a mobile device. And that's primarily something we would process with the smartphone Googlebot, when we crawl the page with the smartphone Googlebot. So if you're changing the values there, I would double-check with Fetch as Google and the mobile-friendly testing tool that the smartphone Googlebot still gets a version that's mobile-friendly. If that's the case, you should be all set. I believe it's not a situation where we explicitly look for that exact number and say, well, 640 pixels is mobile-friendly and 641 is not — rather, we look at how a user with a smartphone-Googlebot-type device would see the page.
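For reference, the annotation under discussion is the pair of link elements commonly used for separate mobile URLs — the alternate link on the desktop page and the canonical on the mobile page (URLs are placeholders):

```html
<!-- On the desktop page: point to the mobile equivalent. -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="https://m.example.com/page.html">

<!-- On the mobile page: canonical back to the desktop URL. -->
<link rel="canonical" href="https://www.example.com/page.html">
```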
Does Google differentiate between strong and bold tags? Does it give more weight to strong or bold, or ignore both? What about the em tag? I think this came up a really long time ago, and we looked at it back then; back then, there was a slight difference, but in the meantime, all of these are essentially the same. So if you're highlighting content within a page and saying this is particularly important, using the normal semantic markup that you would use, then that's something we can pick up and say, OK, it looks like you're telling us these couple of words here are really important — then we can probably trust you on that. It's similar with headings on a page, where we can see, well, this is the main heading of this page, or a main subheading, and there's some text below it that probably belongs together. All of these things give us a little bit of extra information about the content on the page. So I wouldn't worry too much about which of these tags you use to emphasize content — some people underline, some people use bold. It's kind of up to you.

How do I remove the site name appearing in the title of a web page in site:example.com queries? I'm not really sure exactly how you mean that. In general, one thing that sometimes throws people off is that we do have algorithms that try to recognize when it makes sense to rewrite a title — not on the page itself, but in the search results, in the link we show to your website. That's generally based on cases where we think the title specified on the page is not really clear: maybe it's more like a list of keywords, or maybe it's really, really long and we have to shorten it in some way. All of these could be reasons for us to say, well, we need to generate a title of our own to show people, so that when they're searching, they can recognize that this is actually a really relevant page they should visit. So that's something that's done algorithmically. There's no way to force Google not to do it, in the sense of using a meta tag or specifying, this is really, really the title I want shown. The best thing you can do here is simply to make better titles on your pages — which is sometimes tricky, especially if you have a really large website where you're generating titles algorithmically yourself, maybe based on products or from a CMS that you use. But the shorter and more descriptive your titles are, the more likely we'll be able to use them directly.

Sure. Yeah — so something that shows what's indexed when I search with a site: query, if that makes sense?

I think the new Search Console index coverage report — I forget what it's called exactly — goes pretty much in that direction. It lets you see the things that are indexed normally; I think there's a sample cutoff at some point. But it really goes in that direction, where you can see, well, this is actually what is indexed: these are things that are indexed normally, and these are things that are indexed under maybe slightly different URLs than the ones you specified. I think that report kind of grows in that direction. But yeah. Let's see.

Is there any connection between the slow mobile speed ranking factor and the crawl stats in Google Search Console? Generally, no. When it comes to the mobile speed ranking change, we look at a number of different factors: some are computed based on what we can determine on your web pages, and some are based on actual measurements that we've seen, so it's something that kind of comes together. In general, the crawl stats you see in Search Console are purely based on crawling individual URLs, whereas the mobile speed factors are based on rendering the whole page — all of the embedded URLs as well. So those are two different things. That said, it sounds from the question like you're seeing slow or bad numbers in the crawl stats area, and you're worried that they could have an effect on how Google sees the speed of your website in general. For the most part, that's not something you need to worry about.
However, slow crawl numbers there can affect how quickly we crawl and index content from your website. For example, if you're seeing a couple of thousand milliseconds to crawl individual URLs in the crawl stats area, that suggests to me that something kind of weird, or something really slow, is happening with your website — and it would probably result in us not being able to crawl as quickly as we otherwise could. So if your website has a lot of new and updated content coming out frequently and we can't crawl that quickly, we might miss some of that new and updated content. That's definitely worth looking at, from my point of view. Usually, it's completely separate from the ranking side, from the way users perceive your website — but of course, if your website is that slow for crawling, it might be slow for users as well. So if you're seeing really bad numbers in the crawl stats area, I would definitely take a look and see what you can do to improve them. Even if your hosting is on the other side of the planet, we should be able to crawl pages fairly quickly.

All right, another site migration question — wow, sounds like we need to write up some more material about this. Let's see: a site needs to be migrated into a subdirectory, and therefore the site move tool is not available. Yeah, this goes into the previous questions as well. If we can recognize a clean site move from one domain to another, or from HTTP to HTTPS, that makes it a lot easier for us, because you can use the site move tool in Search Console — and if we can recognize that everything is really cleanly set up, we process it fairly quickly. Whereas if you're doing things across subdirectories or subdomains, mixing things together within a website, it just takes a lot longer for us to process all of it. So you really need to set up the redirects properly, and you need to make sure that you're telling us about the old and the new URLs, maybe with a sitemap file, as in the sketch below. We have a bunch of information on site moves in general in the Help Center, so I would definitely go through that, create a checklist, and make sure you're covering as much of it as possible. There are also some really fantastic checklists from other people out there — if you ask on Twitter, I'm sure we can dig up a bunch of really good ones to go through. But with any site move, the cleaner you can make it happen, the faster it'll be processed, and the easier it'll transition from the old state to the new state.
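As a sketch of that sitemap tip: a sitemap left in place for the old URLs, with a fresh change date on each redirecting URL, can nudge a faster recrawl so the 301s get discovered sooner. The URL and date here are invented for illustration; the new URLs would be listed in the destination site's own sitemap.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Old URL that now 301-redirects to its new location;
       the updated lastmod encourages Googlebot to recrawl it. -->
  <url>
    <loc>https://blog.example.com/some-post.html</loc>
    <lastmod>2018-08-10</lastmod>
  </url>
</urlset>
```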
What's the most important thing to focus on when developing Angular 5 websites? Yeah, I think this focuses more on the development side — things like having good content are probably taken for granted here, though that's definitely also worth looking at. In general, when it comes to making a website based on a JavaScript framework, it's important to keep in mind how Googlebot can actually crawl and index it. Googlebot can render JavaScript pages, so we can process these things for the most part — but you must check them yourself as well. You should use things like the mobile-friendly test to check how we can render those pages; within Search Console, there are different tools you can use as well. In the mobile-friendly test, you also have the JavaScript console, to see where things are getting stuck — if you're seeing an empty page there, that's worth looking into.

With modern JavaScript frameworks, it's important to keep in mind that we don't process ES6. We use, I believe, Chrome 41 for rendering, which supports ES5 but not ES6 — so if your framework generates ES6 code, we might not be able to process it (there's a sketch of one way to handle this after this answer). That's one thing to keep in mind. The other is that rendering currently happens separately from crawling and indexing on Google's side, so if you give us a lot of content that needs rendering to be indexed properly, it can take a bit longer to actually be processed. If that's a problem for you — in particular, if it's a news website, or a website generating a lot of really new and updated content — then you probably want to look into things like server-side rendering, or dynamic rendering, as we call it, where you serve pre-rendered content to Googlebot and other search engines, while users see essentially the JavaScript version. There's also a kind of hybrid rendering approach you can take, especially with Angular — I believe it's called Angular Universal — where you can set things up so the initial page load is statically rendered and it afterwards switches to the JavaScript version.

So, lots of technical details to keep in mind. We did a video at Google I/O about JavaScript sites in search — I would definitely take a look at that; it covers pretty much everything I talked about here in a little more detail. From my point of view, it's not impossible to make a really good JavaScript-framework-based website that works well in search, but it's usually a little harder: on the one hand, you have all the advantages of using this fancy JavaScript framework, but you have a bit more work when it comes to SEO, to search, to actually making sure everything works well together. That's definitely worth keeping in mind. I'd also recommend, if you're a pure front-end developer working on a website like this, getting some help from someone with a more traditional SEO background who can give you input on the basics that front-end developers sometimes take for granted — things like URLs, internal links, and the way you structure content on a page. Traditionally, a front-end developer might look at these and say, oh, I can deal with this myself, while an SEO might look at them and say, oh, that will never work, or that will cause so many more problems down the road. Getting that input really early on can make it a lot easier to get on the right path and be successful with a site that uses a fancy JavaScript framework.
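On the ES6 point, one common mitigation in an Angular/TypeScript project is simply compiling to ES5 output. A minimal sketch of the relevant tsconfig.json compiler options — the exact settings vary by project and CLI version, so treat this as illustrative:

```jsonc
{
  "compilerOptions": {
    // Emit ES5 JavaScript so an older rendering engine
    // (Chrome 41-era, per the discussion above) can run it.
    "target": "es5",
    "module": "es2015",
    "moduleResolution": "node"
  }
}
```

Note that the compile target alone doesn't polyfill newer library features; Angular CLI projects also carry a polyfills file for that.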
The next question, I think, goes in the same direction. Also an Angular website: I see a bunch of URLs indexed with a site: query, but depending on how I search, I see different site: query results. What I would say here is: don't use a site: query as a way of diagnosing the status of your website. Instead, use Search Console. In Search Console, you have a lot more information on indexing — how many URLs are indexed, and all of that. So I'd strongly recommend using that instead of the site: results.

Ooh, let's see if we can run through the rest of the questions in the remaining time. A question about mobile-first indexing: our desktop navigation has a lot of categories, and due to technical difficulties, our mobile version doesn't have as many. Is it possible that we'll lose some rankings when we switch to mobile-first indexing? Yes, theoretically, that's possible. If you really have a separate mobile website — so not just a responsive design where the same content is simply displayed differently, but really a separate mobile site — and the internal linking there is worse than on your desktop website, then when we switch that site to mobile-first indexing, we will use the mobile internal linking for your website in general. So if the mobile internal linking is worse — if there are really fewer connections, if we can't figure out the context properly — then theoretically, that could negatively affect how we index and rank your pages when we switch to mobile-first indexing. We do try to look into the internal linking on a website before switching it to mobile-first indexing, but obviously there are limitations to what we can do, and it's normal for sites to have slightly different navigation on mobile. If you really have significant differences there, though, that's something I would double-check. A lot of the tools for crawling your website also support mobile user agents now — things like DeepCrawl and Screaming Frog will let you compare what your site looks like when crawled with a mobile user agent versus a desktop one. Those are, I think, some really important tools, especially if you're aware of potential disconnects here, so I'd really try to clean that up as much as possible.

We notice that every time we make a change to our website, our SEO is affected. I think that's kind of normal: you make a change to your website, and search will obviously take a bit of time to adjust to it. The question goes on to say it takes a couple of days to pick up again, but we don't like these drops — the developer mentioned that every time we make CSS changes, it also drops. I think these are all normal situations, in that if you make a change to your website, that can affect your SEO. And that's something a lot of people use to their advantage as well: you improve your website and hope that your visibility in search goes up. So I see making changes and seeing that reflected in search as something natural — they should be connected.

Do internal links matter? For example, one site has 5 million pages and another site has only 50,000 pages. Would the other site be stronger or rank better automatically? I think it's pretty much impossible to compare two sites like that in isolation, because a site with 5 million pages will be very different from a site with 50,000 pages. On the other hand, if you take a site with 50,000 pages and just blow it up — splitting all of those pages apart so that you make 5 million pages out of it — then obviously you're diluting the value of those individual pages, and they'll have a tougher time in search.
So what I would personally recommend is to only split things up when you absolutely need to — when those pages really serve different purposes — and, as much as possible, to combine things so that you have stronger pages. But sometimes you do need different pages, and you need to keep them: if you have a news website with an old archive containing lots and lots of articles, you wouldn't gain value by deleting all of that, because then people wouldn't be able to find those articles either.

The addition of irrelevant boilerplate content for marketing purposes in the main content — does Googlebot ignore that? Yes, this goes back to the question from before: in general, we try to ignore that. We do try to see what content is above the fold, though. So if you're shoving everything down the page with this irrelevant marketing content, that could have an effect. You'll probably see a stronger effect from the user's point of view, though — on conversions, and on people coming back to your pages.

In the last German office hours, you told me there are some quality issues with my site. At the moment, I can see Googlebot crawling the same URL twice with the same user agent, sometimes a few seconds apart with a different IP address. Can you tell me why? I don't know why. We have a lot of different IP addresses, and Googlebot crawls from a variety of IP addresses. I don't think there's any particular reason why we would crawl the same URLs repeatedly from different IP addresses, but it's completely normal to see different IP addresses used by Googlebot.

Could you describe how to rank for a two-word combination like Mexico flag, if the page is focused on, and good at, buying Mexican flags? When will Google show the page for broader keywords? So this is something that happens organically over time. It's not the case that there's something magical you need to do on your pages to reach the broader queries. It's something we learn over time, where we see, well, this page is actually pretty relevant not just for this specific query, but also for some kind of broader queries as well. So if you continue to work on those pages and make them stronger, that should just happen on its own.

Is there an SEO effect if heading tags like H1 and inline CSS are used to reduce the text size? I think this goes into the text-to-code ratio, and that's not something we really worry about. If you have a lot of HTML on your pages, or a lot of JavaScript or inline CSS, that's not something we would hold against you. The one limitation I'm aware of is that there's a maximum page size, I think, that we use for search. So if you have, I don't know, hundreds of megabytes of CSS on your page before you actually show any of the content, then on the one hand, it probably doesn't make much sense anyway; on the other hand, it might be that at some point our systems say, well, we've indexed — I don't know what the limit is — 200 megabytes of content from this page and still haven't found any visible text that would be shown to a user, so we're not sure it's worthwhile to request more and more content from this page to actually get some text. So that might be a limit worth looking at.
I don't know what the current limit is, but the last time I checked, it was pretty reasonable — as a developer, you'd look at it and ask, why would anyone make an HTML page this large? If you're just inlining a little bit of CSS here and there, that's perfectly fine and not something I would worry about.

I have a directory on a ccTLD for Australia — for example, example.school.edu.au. Can I use hreflang to target different countries, like Canada? So yes, you can use hreflang to tell us which pages are relevant for Canada, but you can't geotarget those pages to Canada. If the content is on a ccTLD that we recognize, like .au for Australia, then you can't geotarget content there for other countries. With geotargeting, what generally happens is that we promote the content slightly in the search results for that country when we can tell someone is looking for something local. With hreflang, we don't change the ranking at all — we just swap out the URLs for the more relevant local versions. So those are subtle differences. But from my point of view, it's something you can handle either way. If you feel that using your country-code top-level domain is what you really need to do — which I can imagine makes sense for a school, because you really want to show you're in Australia and you're an EDU — then you probably want to keep that, and using hreflang alone is probably a good alternative. It's not exactly the same as geotargeting, but it can help get those pages to the right users.

So when you say localized — a good example might be that on their school page they're talking about this college and where it's located, and you would typically say, oh, this campus is in Canada. Is that kind of how that would work, typically?

So I guess if the campus is in Canada, that seems like a tricky situation: if you have an Australian domain name and you're saying, well, we're local in Canada, I could imagine that might be a case where it's worth figuring out how to geotarget. But usually, what would make sense with hreflang here is, for example, if you have content in maybe Australian English and in US English — slightly different words — and you want to make sure the right version is shown to US users. That's where you could use the hreflang link to say, this is US English, this is Australian English, and then we would swap out the URLs when someone searches for the college name, so we can show the right version. It could be similar across other languages, too. If you have, say, a Spanish page about the college in Australia, then when someone searches just for the college name, from that query alone we might not know whether they're looking for Spanish or English content. But if you have the hreflang setup and we see that their browser or search settings are set to Spanish, then with hreflang we can see, oh, the English page would rank here, but we have an alternate Spanish page, so we'll swap that out for the user.
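As a sketch of that hreflang setup: each version of the page lists all of its alternates, including itself, with reciprocal link elements (the URL structure here is invented for illustration):

```html
<!-- Repeated on every version of the page, self-reference included. -->
<link rel="alternate" hreflang="en-au" href="https://example.school.edu.au/college/">
<link rel="alternate" hreflang="en-us" href="https://example.school.edu.au/us/college/">
<link rel="alternate" hreflang="es"    href="https://example.school.edu.au/es/college/">
```

Since hreflang only swaps URLs rather than changing ranking, this pairs fine with keeping the .edu.au domain, as described above.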
All right, let me double-check what I've been missing in the chat here — so much happening. You mentioned an e-newspaper: I have a reasonable need to close the website and redirect the URL to a new newsletter. Can you tell me if that violates Google's guidelines?

I don't understand the question exactly, but I think if you have different episodes or different versions of a newsletter like that, what I would generally do is have one general URL for this newsletter and move the older versions into an archive. So instead of redirecting between the different versions, I would keep a more general landing page, and as a new version comes out, move the old one into an archive and keep it separate. I don't know if that addresses your specific question, but it's a really common situation — for example, with yearly events, where some people will go and register conference2018.com, and then the next year they'll have conference2019.com, and that makes it really hard for us to understand which of these pages is relevant for that conference. Whereas if you have one generic landing page for the conference name and put the current version there, then next year move that version into an archive and put the new current version in its place, that makes it a lot easier. I could imagine doing something similar with a newsletter-type website. It's essentially the same as how it works on a blog: you put the new blog post on the main URL of the blog, and over time, the posts move back into categories and into the archive sections of the website.

Yeah, I think we pretty much covered everything. Cool. All right — I will be out the next two weeks, so we won't have the next set of Hangouts as usual, but I'll set them up for when I'm back. If anything comes up in the meantime, feel free to drop it in there. Also, if you need faster help, I'd definitely recommend going to the Webmaster Help Forum — there are lots of really helpful and smart people there who can help you with different issues, and of course, there are other webmaster forums as well with smart and helpful people. I hope this was useful, and if you have any feedback, please let me know. Hopefully we'll see each other again in one of the next Hangouts. Bye, everyone.

Thank you. Have a good weekend.

You too.