All right, welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do is talk with webmasters and publishers like the ones here in the Hangout. It looks like a bunch of questions were submitted already, but there are some new faces here. For those of you who are new to these Hangouts, if you have a question that you'd like to get answered, feel free to jump on in and let us know. Hi, John. I'm not that new, but I've got a quick question if that's all right. It's a really specific one about HTTPS migration. There are certain things, obviously, that people can do to help Google see that the site has migrated and re-crawl it quickly, and one of those things is a sitemap. So you can, for example, maintain your original sitemap pointing to your original HTTP pages, and when Google crawls that as usual, it will see the 301 redirects and jump across. I was just wondering about your personal take, because there are a couple of ways of doing this. One is that you maintain the existing property in Search Console, so you still have an HTTP sitemap pointing to HTTP links, and when Google crawls those in a business-as-usual kind of way, it will find the 301s. Another thing that's recommended, or maybe you can do both, is that you create your new property in Search Console, which is the HTTPS version of the site, and you put your HTTPS sitemap in there, but you also, at least temporarily, put your original sitemap in there pointing to the HTTP links. I might be wrong here, but I think different people at Google have recommended different ways of doing this over time, and I just wondered which way you would specifically propose. Yeah, so the general thing, especially with site moves, which is what an HTTPS migration is, is that you can do something that's actually kind of not recommended for sitemaps otherwise, in the sense that you're submitting URLs that you know you don't want to have indexed like that. So you're submitting the old ones. Whether you submit those with your old site or with the HTTPS site doesn't change anything. As long as you're submitting a sitemap for the old URLs, we can go off and recrawl those old URLs based on the new change date that you give us, and we'll try to crawl those a little bit faster, recognize the site move, and pick that up a bit faster. So it doesn't really matter where you submit those. But if you send URLs in a sitemap file with a new change date, then we'll try to recrawl those a little bit faster. Great. And if we happen to leave it in the old property, but we also add the sitemap to the new one as well, does it matter if you effectively have a duplicate sitemap across both? It's just helping Google, is it? Yeah, that's totally fine. Sometimes it can be a bit confusing for you as a webmaster, if you look at the new site and see two sitemaps and wonder which one is the right one. But it's totally up to you which way you want to do it. Brilliant.
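As a concrete illustration of what's being described here, a sitemap of the old HTTP URLs with a fresh change date might be generated like this. This is a minimal sketch in Python; the domain and URL list are hypothetical placeholders, not anything from the Hangout:

```python
# Minimal sketch: build a sitemap of the OLD HTTP URLs with a fresh
# lastmod date, so Google recrawls them and sees the 301s sooner.
# The domain and URL list below are hypothetical placeholders.
from datetime import date

OLD_URLS = [
    "http://www.example.com/",
    "http://www.example.com/products/",
    "http://www.example.com/about/",
]

def build_sitemap(urls, lastmod=None):
    # Default the change date to today, i.e., "these just changed".
    lastmod = lastmod or date.today().isoformat()
    entries = "\n".join(
        f"  <url>\n    <loc>{u}</loc>\n    <lastmod>{lastmod}</lastmod>\n  </url>"
        for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

print(build_sitemap(OLD_URLS))
```

As discussed above, this sitemap can be submitted in either the HTTP or the HTTPS Search Console property; the point is just the fresh lastmod on the old URLs.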
I mean, it's coming up to the sales time and holiday season, but at the same time, Chrome is going to switch in January, so probably there are quite a few people now going, oh god, should I do it now? But if I do it now, will my rankings temporarily drop for a few days? And how can I keep that as short as possible? That's the kind of reason I'm asking these questions. Yeah. So I guess from a timing point of view, my recommendation would be to do site moves and changes like this at a time when you're not dependent on search traffic. So if December is the period where you get all of your sales and a large part of your visitors come in from search, and you do a site move then, any mistake that you make there could have big consequences. So maybe I'd wait until, I don't know, January 1st, or whenever that kind of black-hole period comes when people don't go online, and do the site move then, to minimize the disruption from search. Actually, the big sales period in the UK is kind of the first of January, or Boxing Day onwards, so that's a month or so from now. But the thing is, if we don't do it now, then Chrome switches in January, and then you kind of have to wait till March. So, you know, that's why I want to do it. I don't know when Chrome is going to start doing that. From what I understand, the switch in Chrome is only for login pages. So specifically, if you have a login on your site and it's on HTTP, they'll flag that as insecure. So one thing that you could potentially do, and I'm not on the Chrome team, so I don't want to suggest workarounds, but what you could potentially do is just put your login on one page and link to that login page, instead of putting the login form on all of your pages. Maybe that's a temporary solution. I mean, that's how I understood it as well. But we already have HTTPS on part of the site, on checkout; we just don't have it on login. So yeah. But in your experience, and I appreciate everyone makes mistakes, but if you did a great job of it, you did the sitemaps thing, you did your canonical thing, you did whatever, then theoretically, from a rankings point of view, it should happen fairly quickly, within four or five weeks? In my experience, it's a matter of a couple of days when everything works well. But this isn't something that I'd try to do during a time when you're dependent on search. If you're really dependent on search, then maybe find a way to push that off until later, or find a way to work around this temporarily. I would say in 99% of cases, everything just goes well. But you never know if you're that 1% case, where your host has decided to mess something up that you haven't prepared for, and that throws everything off again. So from a practical point of view, especially if you're dependent on this traffic, I'd be cautious. OK, brilliant. Thank you.
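On the "did a great job of it" point, one part you can automate is spot-checking that the old URLs really return 301s to their HTTPS counterparts. A minimal sketch, with hypothetical URLs:

```python
# Minimal sketch: verify that old HTTP URLs 301-redirect to their HTTPS
# counterparts after a migration. The URLs below are hypothetical.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # don't follow redirects; we want to inspect them

opener = urllib.request.build_opener(NoRedirect())

def check(url):
    try:
        resp = opener.open(url, timeout=30)
        # Reaching here means no redirect happened at all.
        print(url, "->", resp.status, "(no redirect!)")
    except urllib.error.HTTPError as e:
        # The 301/302 surfaces as an HTTPError once we refuse to follow it.
        print(url, "->", e.code, e.headers.get("Location"))

for url in ["http://www.example.com/", "http://www.example.com/checkout/"]:
    check(url)
```

You'd want each old URL to report a 301 with a Location pointing at the matching HTTPS page, not at the homepage or through a redirect chain.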
Can you highlight the speed thing you were talking about on Twitter? I mean, wow, I really liked that. So, two to three seconds, and if you go over five, you lose rankings, right? I don't think we have any numbers for when we would use that for search. My tweet was mostly for that one person who asked specifically, and that's kind of what I would aim for if I had a website. It's not that we would aim for that from a search point of view. From a search point of view, we also wouldn't be able to get those numbers exactly the same anyway, because we try to cache the embedded content, and the cached embedded content will have an effect on how quickly a page renders. So you couldn't even measure it the way we would measure it. But from my point of view, from my experience, I'd aim for something like that. When I see sites with hundreds of embedded resources that take 20, 30, 40 seconds to load, that's usually a point where I imagine a lot of people just jump off and go somewhere else, especially on mobile. So sites on shared hosting are in big trouble, because a lot of them are over five or six seconds. I mean, I've looked at hundreds of websites over my time. Yeah, I wouldn't say they're in trouble. I mean, we're not changing anything there with regards to speed. So this is really just my personal view of what I would recommend aiming for if someone were to ask me, what number should I aim for, John, I really have no idea what to look for. And we've said, I think, below one second at some point. But that's really, really hard to achieve, below one second. But if it's 26 seconds measured on mobile, that's still going to translate to even eight, nine seconds on a faster connection. Yeah, I mean, that's still hard. But when you look at the studies where people measure the attention span of someone going to a web page, and you see they switch off mentally after three or four seconds, then that's the time you have to grab their attention and give them something they find interesting. So I'd look at it more from a user point of view than from a search point of view. But from a user point of view, anything you can do to speed things up definitely makes a big difference. John, but from a search point of view, you're definitely looking at the pre-render time, right? The time it takes for the server to respond, or maybe time to first byte, or things like that? I guess there are two aspects there. For crawling purposes? Yeah, I mean, that's not directly related to ranking, though. That's essentially, if we have a big website and the server is really slow, then we'll have trouble crawling everything, because we can't get to all of the pages. So that's the technical, crawling side. On the rendering side, we also try to have some reasonable timeout, where if a page doesn't render within, I don't know what the number is, but within a reasonable time, because all of the embedded content is just so slow, or it uses a lot of third-party APIs, then rendering might fail, and we'll revert back to the HTML version of the page. If you have a static HTML page, that's less of a problem. But if you're building on a JavaScript framework, and you rely on Googlebot being able to render the pages, and Googlebot times out while trying to render them, then you're going to, I don't know, have a bad day, at least. What if I have a logo in the footer, from a seal or something, that's loading from another server and it fails to load? Does that impact the whole page, or how you see the rendering of the whole page, even if it doesn't affect the actual main content? That's something you can try out. If it blocks rendering, then that could have an effect. If it doesn't block rendering, then that's fine. You can just edit the source code and point it at a, I don't know, broken host name or something, and see if the rest of the page actually loads. So for images, I assume, in most cases, that's no problem. But if a JavaScript file fails to load, if your jQuery file fails to load, then the whole thing is potentially going to fall apart. Or maybe not.
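Since server response time comes up here for crawling, this is a minimal sketch of measuring time to first byte and total HTML fetch time for a page. Note that it only times the HTML download, not rendering or embedded resources, and the URL is a hypothetical placeholder:

```python
# Minimal sketch: time the server response (roughly, time to first byte)
# and the full HTML download for a page. This ignores rendering time and
# embedded resources entirely. The URL is a hypothetical placeholder.
import time
import urllib.request

def timings(url):
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read(1)                    # headers plus the first body byte
        ttfb = time.monotonic() - start
        resp.read()                     # the rest of the HTML
        total = time.monotonic() - start
    return ttfb, total

ttfb, total = timings("https://www.example.com/")
print(f"TTFB: {ttfb:.2f}s, full HTML: {total:.2f}s")
```

For the user-facing numbers discussed above (the two-to-three-second aim), you'd measure full rendering in a browser-based tool instead; this sketch only covers the crawling-side aspect John describes.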
All right, let me run through a bunch of the questions. I'll try to go really quickly so that we can cover some ground here. And if you have more comments on the answers or the questions, feel free to jump on in. Otherwise, we'll open up questions for everyone a little further on. Our e-commerce site has Panda issues. We have about 3,000 pages to go through and improve. Will we see improvements only after fixing all of these pages, or after a part of them? So from our point of view, the Panda algorithm does look at the site overall. But if you're cleaning things up incrementally, that's perfectly fine. It's not the case that Panda, or most of our algorithms, just say yes or no, this is good or bad; rather, there's a sliding scale. So if you improve a lot of stuff, that will be reflected over time as we reprocess it. I want to migrate a really big website with thousands of subdomains to HTTPS. Should I do them all at once or individually? From my point of view, you can do them all at once. From a search point of view, I don't see any difference between doing it incrementally and doing it all at once. From a practical point of view, you probably want to test your setup and try out a few subdomains, or a few URLs, or part of your website first, to see if it actually works. I believe The Guardian just recently moved to HTTPS, and they did a really nice blog post on what they watched out for. I'd take a look at what they did, because they also have a lot of pages and they managed to migrate that. More and more websites are using AngularJS. What can we do to make sure that we don't fail when it comes to search? So there are a few things. I did a video in October for AngularConnect; I would go and watch that. That's, I think, about modern websites and search, but it's focused on AngularJS. I would definitely go through it, look at those common mistakes, and make sure you have that aligned. Another thing we recently did is a blog post on progressive web apps and search, and how to make them indexable, and that essentially covers similar problems. So I would go through that blog post as well and make sure that you have all of that covered. There are some testing tools mentioned there that let you double-check that you're doing things right. Other than that, AngularJS should work fine when it comes to search. No, I was saying there's also an amazing video on progressive web apps, if anyone wants to watch it. It's for developers, but I think it's great. Yeah, I think progressive web apps have a lot of potential, so I'm sure we'll hear a lot more about that over time. Is a photography website without a lot of textual content automatically seen as a low-quality website? No, a photography website without a lot of textual content can be perfectly fine; it can be a high-quality website. The main thing I would look at here is that if you're trying to rank in image search for those photos, and you don't have those photos described in any way on your website, then that's going to be really hard. Then we're probably going to show other websites that embed the same photos instead of your website. That's especially the case if the landing pages for those individual photos are essentially DSC and some random number, rather than a description or some clean title or caption or anything for those photos. So that's one thing to keep in mind. If you're a photographer and you don't want to rank for your individual photos, then that's perfectly fine. You'll still have content on your website saying, hi, I'm a photographer in this location, I do these kinds of photo shoots, and here are some examples, and that's what you'll be able to rank for.
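To make the DSC point concrete, here is a minimal sketch of the kind of before-and-after being suggested; the file names and captions are hypothetical examples:

```python
# Minimal sketch: descriptive file names and alt text instead of camera
# defaults like DSC_4711.jpg. All names and captions are hypothetical.
photos = [
    # (camera default, descriptive name, caption / alt text)
    ("DSC_4711.jpg", "zurich-wedding-lakeside.jpg",
     "Wedding photo shoot by the lake in Zurich"),
    ("DSC_4712.jpg", "zurich-wedding-first-dance.jpg",
     "First dance at a Zurich wedding"),
]

for old_name, new_name, caption in photos:
    print(f"{old_name} -> {new_name}")
    # The caption doubles as the alt text on the landing page.
    print(f'<img src="/photos/{new_name}" alt="{caption}">')
```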
In Webmaster Academy, there is a part that says if you have high-quality content, people will want to revisit your site or link to it. Do you look at revisits only from queries that contain a site reference, or can you see when users revisit my site via generic queries over time as well? From my point of view, I don't think we use that at all. I think this part of Webmaster Academy is more of a general statement: if you have a high-quality website, then of course people are going to recommend it to other people. It's not that we're going to try to figure out what people are actually doing, but we'll try to figure out indirectly which websites are high quality and use that. Can you give some tips as to what the main things are that Google specifically looks for when deciding which keywords to rank a page for? And what advice can you give us on things to do if we want to rank for a particular word or phrase? That's kind of hard to do. But one thing I see a lot of websites mess up, especially smaller businesses, is that you need to actually mention what you want to rank for. So if you're a small business and you're focusing on one specific type of product, then you need to mention that product and be clear about what your product does. That's kind of obvious if you look at it from that point of view. But as a small business, you're sometimes so focused on trying to bring your image across. You have a website, I don't know, maybe selling your software or some products that you provide, and it has lots of really nice photos on it, but you don't really explain what it is that makes your business special, and what it is that you want to be found for. So that's something I definitely recommend: taking a step back and having someone who doesn't know your business and doesn't know your website look at it and ask, can I understand what this business is about by reading the text on this page? Would I be able to know what type of people to send to this website? I noindex content on my site, and other people keep linking to it. Do these links make my site rank higher overall, even though they point to a resource that's noindexed? Yes. I mean, if it's noindexed, then we will still forward the PageRank from links going to that page to the other pages that you're linking to within that page, which is usually your navigation, the rest of your site, categories, whatever you have there. So that's something we'll definitely forward. The one thing I would think about in a situation like this is, if you have a lot of people linking to your content, and you're trying to say, I don't want this content to be found, then somewhere there's a mismatch. Is this something you want people to go to? If so, then make it available for everyone to find. If it's something you don't want people to go to, then maybe it makes sense to actually take that content down, instead of just noindexing it on your website.
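As a side note to this answer, if you want to audit which pages actually carry a noindex, a minimal sketch of checking a URL for the directive, in either the robots meta tag or the X-Robots-Tag header, might look like this. The URL is a hypothetical placeholder, and the meta-tag check is deliberately rough (attribute order on real pages varies):

```python
# Minimal sketch: check whether a page carries a noindex directive, via
# the X-Robots-Tag header or a robots meta tag. URL is hypothetical.
import re
import urllib.request

def is_noindexed(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        # Header-based noindex (servers can send this for any file type).
        if "noindex" in resp.headers.get("X-Robots-Tag", "").lower():
            return True
        html = resp.read().decode("utf-8", errors="replace")
    # Rough check for <meta name="robots" content="...noindex...">.
    pattern = r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex'
    return re.search(pattern, html, re.IGNORECASE) is not None

print(is_noindexed("https://www.example.com/some-page"))
```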
Do you check the amount of good content on a page and the originality of the content, like whether this content can be found on any other website? Yes, to some extent we do that. We try to recognize what's unique on a page, and we try to highlight that in the search results when someone is searching for it. We also try to recognize which parts of the page are essentially duplicated across a bunch of other pages. From our point of view, that's perfectly fine, but we'll probably look at that and say, well, maybe this is not the primary information on this page that we need to show this page for. So we do try to recognize that automatically. You don't need to do anything like use a blockquote HTML element for that; we try to pick it up automatically. When a site has great links from other great sites, does it mean that my site is higher quality because of that? No, not necessarily. We do take links into account in our crawling, indexing, and ranking algorithms, but just because there are links there doesn't necessarily mean that we will rank the site above everything else. We take a lot of factors into account when trying to figure out what is relevant and what is not. Does Google expect us to start using AMP? Much like you encourage mobile-friendliness and HTTPS, will AMP become a ranking factor? So far, I'm not aware of any plans to make AMP a ranking factor. I think it's fine to keep it as it is, in the sense that this is something you provide for your users that makes your site a lot faster. And you get a lot of value out of that automatically, because users are able to read more content, get your content faster, and actually do something within your website, within your business, much faster than they otherwise would be able to. Can I ask you something about that? Sure. Why are you highlighting AMP in blue? Because that kind of takes attention away from everything else on mobile. Which color should we use? I think transparency, just something more subtle. Yeah, I think the main reason we added that bar there is because people were confused about what this AMP actually means. Do they have to watch out for it? What's the reason behind it? So we wanted to provide some background information on what it actually means. I suspect over time, when people get used to it, that will disappear again, because people will know what it means. From a UI point of view, it could be argued which color or layout should be the right one. Knowing the people at Google, I'm sure they tried a lot of different colors and a lot of different styles to see which one works best, which one brings the message across best. But I suspect that will change over time. OK. What would you say is the best type of link to help with rankings nowadays, not trying to spam or manipulate anything? So I guess the best type of link is a natural link, which it sounds like you already know. Essentially, anything that comes from a natural recommendation from a user who likes your content, who likes what you have there, and says, hey, this website does something really great, I want to recommend it to my friends. That's the kind of link that our systems essentially try to look for.
We have a question about the articles on the blog pages of our e-commerce site. Does Google give more weight and relevance to internal links within an article, as opposed to internal links at the end of the article? No, I'm not aware of any difference there. I suspect the anchor text might be different depending on where the link actually is, but whether it's at the top of the article, in the middle of the article, or at the bottom of the article essentially doesn't change anything for us. I was wondering whether the high ranking of multiple pages of a single domain benefits the domain authority itself. For example, there are two websites. Website A has 10 pages, all ranking number four. Website B has 10 pages, all ranking number one. Does website B look more favorable in Google's eyes? Does website B get a boost for consistently publishing quality content that Google ranks high in the search results? So, first of all, we don't have domain authority. That's a third-party metric created by some other tools. I think it's interesting to look at, but it's not what we would use for search. With regards to whether it's useful for a website to consistently be ranking high, I imagine if your website is the one that's consistently ranking high, then you're happy with that. It's hard to say whether there's additional value from that. In general, especially when it comes to new content on a website, we do try to fall back on a general understanding of the website before we're able to really figure out where this content fits in. So if a website has consistently been giving us really high-quality, great content, then when they give us something new, we'll probably look at it and say, oh, this is probably similar to the rest of their content, which was actually pretty good, so maybe we should take that into account. Whereas if a website that we don't know anything about gives us a new page, then we're kind of stuck; we don't really know what to do with it. How does a website get into the Google sandbox, and how does it get out? So we don't have a Google sandbox. We have a bunch of algorithms that might look similar, in the sense that, like I said before, we need to understand what a page is about first. But it's not the case that we have a Google sandbox or anything like that. It's not something that you fall into, or something that you have to dig your way out of, or where you have to artificially wait 30 days. It's essentially just that we need to figure out what your pages are about and where we can show them relevantly, and we'll try to do that as quickly as possible. Sorry, can I just ask a quick question on that one, if you don't mind? So I'm working on a project at the moment, and it's quite a big, dynamic kind of site, where quite a lot of it is driven by development and various related bits, all relevant. What I've noticed is that from time to time, when we do certain things, it seems almost like Google goes off, has a look at it, puts it to one side, and checks everything's fine. Rankings drop a little bit, and then it's almost like it has been looked at, that's fine, and all activity resumes. Does that ever happen? Say you have a big site where you're doing quite a lot of development dynamically, and it could potentially generate quite a lot of pages, but obviously they're all relevant to, say, I don't know, services and local listings and the like.
Would it sort of trip some kind of filter that says, well, actually, loads of URLs just appeared on this website, we need to have a look at that and make sure they're not spamming and just churning out URLs like crazy for no reason other than to create, maybe, doorway pages? So kind of like a sandbox, but a temporary one that just trips a filter and needs to be checked in case it's doing something dodgy, which it's not. I don't think we'd have anything like that. I mean, there are definitely technical aspects, where if we find a lot of new URLs, then we have to crawl them and understand them, and all of that takes a bit of time. But I don't think we'd have anything where you have a normal existing website, and suddenly you have 1,000 pages more, and therefore, because you have 1,000 pages more, we will see the rest of your website as being lower quality. I don't think that would make sense. That's something where we try to understand the bigger picture first. So there's no kind of manual, you hit a certain threshold and all of a sudden some engineer goes, hey, they're going to break the internet, I'm just going to have a look at that, type of thing? I can't see us having any manual threshold like that. I think our engineers would be too busy to handle all of that. What sometimes happens from a manual point of view is that if we see a website significantly broken, then sometimes they'll flag it to us and say, hey, John, you need to tell this webmaster they're doing something really bad and that they should fix it as quickly as possible. But if things are normal and we just see a lot of new pages, then our systems should be able to handle that. That's what we built them for, and why we made them automated. Okay, thank you. Thanks. All right, let me run through a bunch more questions, and it looks like there are already big discussions in the chat, so I'm sure I'll hear from some of you. Is structured data going to be a ranking factor? Not that I know of. For a large part, this is really just something that we use for rich snippets. How do you stop Google from using dates within an article as the published date, if none is provided? That's tricky, because we sometimes try to figure out what the date might be, and we'll try to show that in the search results. So one thing that you could do is, of course, provide a date; make a clear published date on the pages themselves, so that we can pick it up. But even in cases where you do have that, and there are other dates in your article, sometimes our systems might get confused. Sometimes it's useful to just send these to us as examples, and I do pass them on to the team that works on recognizing dates and flagging them in the search results. But it's not that we'd have a manual list of URLs, where someone goes into a database and says, oh, they meant this other date, and manually changes that date. That's not the case. So your feedback is valuable, but we use it to improve our algorithms over time, not to manually adjust things for that one specific article. When I post a new article on my website, it doesn't rank for a really long time, even though it's indexed, sometimes for months, and eventually it just pops up where I would expect it to be. This happens over and over again. Is this some kind of penalty? Is this a trend, or what could be happening here?
This is, I guess, similar to what Don mentioned just before, in the sense that sometimes there are technical things that happen on our side, where we have to understand the content first, and that can take a bit of time. But it's not something where we'd have any kind of manual action, where someone from the web spam team would be saying, hey, this website shouldn't have any new content, and if it does have new content, then I want to review it manually. That's not really how Google's systems operate, how Google Search works. We try to automate things as much as possible. So in a case like this, I would guess that either there's a technical issue where we have to actually understand this content first, or your website is seen by our systems as being kind of borderline from a quality point of view, where maybe we wouldn't pick up the new content as quickly as you'd expect, just because we're not sure how we would actually want to rank it in the search results. How much is on auto mode? How much is on auto mode right now? Oh, pretty much everything. I mean, our search results are meant to be automated. There's no possible way for us to do the search results manually. We have a web spam team, but they're a limited number of people; there's no chance that they could review the whole web. I did a presentation, I think a couple of weeks back, for some kids, and the current number we have on the How Search Works page is 130 trillion pages. If you printed all of those pages out, you would have, I think, over 30 piles of paper reaching all the way to the moon. So that's the quantity of content, and it's growing all the time. There's no way that we could do that manually. So manual action will only be for sites with, I guess, serious issues that you guys would look at, that the web spam team would look at, right? Yeah, I mean, if there are spam issues, then we would look at the spam issues, and when we take manual action, we show that in Search Console. So it's not that we would take manual action and just not tell anyone about it. It would be visible in Search Console, and they could do a reconsideration request, and we would be able to pick that up again. Okay, thanks. John. Yes. You just mentioned lower-quality sites, et cetera, that maybe do a lot of dynamically regenerated stuff, which is not the same content, but often matches similar queries with low search volume. Yeah. So that probably is classed as the lower end, not as important as, say, fresh news that's obviously really popular. So might that kind of content be put to one side, even though it matches queries, very low search volume but quite a lot of them? Might it be put to one side with the thinking, oh, well, we've already got something in there, even if it changes a lot? I don't think we treat that separately. It's a low priority, yeah. I don't think we'd look at it like that. It's more that if we're not sure about the quality of a website overall, then our algorithms might say, well, I've got some new content here, but I'm not sure how relevant it is and where we should be putting it. Maybe we should get more information about this content before we start showing it highly in the search results.
So that's from that point of view, but it's not the case that we would look at the query volume and say, oh, nobody is searching for this, therefore we're not going to index this page about this topic, because then you'd kind of create a vicious cycle through the algorithms. If nobody is searching for it, then even if you had content, and even if someone were to search for it, they wouldn't find any content, so of course nobody is going to search for it. You kind of want to have that content indexed, so that when someone does search for it, we can show it. So it's less a matter of how many people are searching and how long-tail it is, that kind of question. OK, OK, thanks. All right. We created an article and manually submitted it to the Google index. It has unique content. The page was submitted, and we were able to find it in search, but less than 24 hours later, it had completely disappeared from Google's index. How is that possible? So these things can happen. We don't guarantee indexing. Sometimes we pick things up quickly; sometimes they drop out again over time. That can be completely normal. It's something that settles down over time, as we're able to understand where to place that URL in our index. Almost a week ago, we had to 301 redirect some pages to new locations. There were 10 pages, mostly ranking number one through number three. A week later, three pages have maintained their positions, the rest of the pages have dropped three positions, and all the images have disappeared. What happened? So any time you do a site move, you might see fluctuations like this, where it does take a bit of time for everything to settle down. In particular, when it comes to images, it's important that you redirect both the images themselves, the image files, and the landing pages, so that we're really able to migrate the connection between the image and the landing page directly. So that's one thing to watch out for. The other thing to keep in mind is that images usually update a lot more slowly in our index than web pages, because for the most part, an image URL doesn't change. It's not something that constantly has a new variation of the image under the same URL, so we tend not to crawl them as frequently. So, especially if you have an image-heavy site and you do a site migration, it will probably take longer for image search to settle down than for the web search side. Any news regarding voice search traffic being included in Search Analytics? At the moment, I don't have any news on that. From an SEO point of view, which would be better: to use our own subdomain as a CDN, or is it the same if we use something like cloudfront.net? From a search point of view, purely a search point of view, you can use either one, either a hosted subdomain, like you might have on Blogger, for example, or on WordPress, or your own domain. From a practical point of view, I think it always makes sense to go with your own domain, just so that you have everything under your own control and can do whatever you want there. So if you want to move to a new platform, you're free and able to do that, whereas if you're tied to an existing platform through your domain name, then you're kind of locked in. That's something I would keep in mind, especially if you're planning on a longer-term search presence. A friend of mine has a client that has a lot of large websites with a lot of products.
These products aren't sold directly on the site; he sends people from his site to e-commerce stores. What would be the correct schema markup for the products on his site, since he doesn't sell them there? I don't actually know, so I don't have a great answer for you. I suspect you would probably still use the product markup, just to let us know about that. However, we probably wouldn't do much with that markup in search. So that's always something to keep in mind: you might use the theoretically correct markup on a page, but that doesn't necessarily mean that we'll show it in search, or that it'll have any effect in search. So if you're stuck between implementing a specific type of schema.org markup on a page, or making bigger changes that are actually visible to users and to search, then you might want to keep in mind which one of these is actually visible and which one might not have any effect at all. This has been puzzling me for a couple of months. Since mid-September, Google has started displaying our .co.uk URLs, despite the fact that we migrated to .com in 2015. Now half of our traffic is going to these .co.uk URLs. What's up with that? So I saw a few cases like this recently, I think around the same timeframe, mid-September, October, and from discussions I've had with the team, we basically made some changes in the way that we show URLs in search for geo-specific queries, and some sites might be seeing changes like this. From my point of view, it's not really perfect, in the sense that if you're redirecting, and you've been redirecting for a long time, then probably we should just take that into account and show the URL that you're actually redirecting to. But there are still some ongoing discussions there, so any feedback that you have, specifically with regard to ccTLDs that you've been redirecting, is really welcome. That's a real mess at the minute, John. One of my personal projects is kind of a lead-generation site for tradespeople, which I've had for years, and obviously I monitor things like, I don't know, electricians in X area. And in the UK, there are absolutely loads of sites that turn up from Australia, New Zealand, any other country that's got the English language. So presumably, if somebody's looking for an electrician in a small town in England, the chances are they don't want to see Australian, New Zealand, and American sites turn up when they're searching in the UK. So it's a bit of a nightmare, really. That sounds like something different, though. In this particular case, they have a .com, and they also have a .co.uk, and they're redirecting the .co.uk to the .com, and we're showing the .co.uk in search. You're saying it's kind of the opposite way around: you search for something local, and it shows something completely unrelated and non-local. I mean, it might cost quite a bit to get an electrician from Australia to rock up to Manchester. So I think it's because there are town names that are the same in different countries. But there doesn't seem to be any logic to it whatsoever, despite a lot of the pages having things like Google Maps on them, which should at least provide some indication of where they're from. Are you checking in the normal search results, or using some kind of ranking tool? No, normal, manual. OK. But this is something I would love to have examples for.
So if you're noticing that, regardless of whether it's with your own sites or not, and you're really seeing that someone is searching for something that should be local to the UK, and we show someone from Australia or New Zealand, then that's something I'd love to bring to the team and have them take a look at. Yeah, I'd be glad to. John, I'll send you some as well, because we see this all the time. OK, great. That's good; that'll be useful. I mean, this is something that seems to come and go in waves. Sometimes we get things really right there, and sometimes it heads off in that direction again. Oops, there was a question about IPv6 as well: should I move to IPv6, or provide IPv6? From a search point of view, we do try to crawl with IPv6 if we notice that it's available. But it's not the case that we would treat sites differently in any way depending on whether IPv6 is available for your site or not. So I wouldn't see that as critical. Obviously, more and more users are moving to IPv6, so you'll probably see that traffic growing over time. But it's not that we would use it as a ranking factor, like, is your site available on IPv6 or not? Because even users on IPv6 access normal IPv4 sites all the time. A big bunch of questions together. Does using Cloudflare affect my ranking and indexing? No, it wouldn't. One effect you might see if you switch CDNs, or switch to a CDN from not having one, is that we'll probably crawl a little more conservatively for a while, until we've figured out what speed we can actually crawl at without causing problems on your server. So the crawling side is somewhere you might see an effect. Let's see, this kind of goes on in different variations. We have user-generated content in users' preferred languages, and we have the chrome, the navigational text, translated into other languages. What do you recommend doing there? This is something where you could probably use the hreflang tag to let us know about the different language variations that you want to have shown to people. What will probably happen is that we'll get a little bit confused if the main part of your page is in Spanish and you say, well, actually, this is the English version of the page. But for the most part, if someone is searching in English, then we'll just guide them to that page, even if the UGC on that page is in Spanish.
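For reference, the hreflang annotations mentioned here look roughly like the following minimal sketch; the URLs and locales are hypothetical, and each language version of the page would carry the full set of annotations, including a link to itself:

```python
# Minimal sketch: emit hreflang link elements for the language variants
# of one page. URLs and language codes below are hypothetical.
variants = {
    "en": "https://www.example.com/en/widget/",
    "es": "https://www.example.com/es/widget/",
    "x-default": "https://www.example.com/widget/",  # fallback version
}

# Every variant page includes this same block in its <head>.
for lang, url in variants.items():
    print(f'<link rel="alternate" hreflang="{lang}" href="{url}" />')
```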
John, one more, sorry, I've not been along for a few weeks, so I've saved up a few things. On that CDN subject: we used to get a crawl rate of around 50 to 60,000 URLs a day, something like that. It's dynamically generated content, and lots of things change on a lot of the pages, obviously, based on jobs posted, location, reviews, all that kind of carry-on. I moved that onto a CDN, Cloudflare, and the crawl rate dropped through the floor, and it's never recovered. I mean, the rankings didn't get massively impacted, but the crawl has never, ever returned, even though I took it off that CDN. I've kind of come to believe that there's a queuing system in place for URLs, and for me, I feel that queue just got so big, because so much changes on that website, that it was just a disaster for dynamically generated content on a CDN. You know what I mean? It's hard to say without looking at the site. When you say dynamically generated, I don't really know what is happening there. If you have a lot of URL parameters, then that could be something our systems pick up on and say, well, we don't really need to crawl that much; we can still pick up all of the content without crawling all of these unneeded parameters. So that might be an aspect that comes into play. But in the Search Console Help Center, there's a link to report a problem with Googlebot, which is a pretty complicated form that you can fill out. That goes directly to the Googlebot engineers, who can take a look and see, are we crawling properly or not, or is there something we need to tell you about your site? For the most part, they won't respond. But if you're saying, I just moved to a new CDN and crawling has essentially stopped, then they can take a look at that and say, OK, this is actually a pretty stable CDN; we can crawl a lot faster than we have been. OK. It just never seemed to come back after that. But it didn't affect rankings, so that's kind of a good thing, in that case. Yeah. You don't need to artificially increase the crawl rate if we can pick up the new and changed content anyway. It's worryingly small, though; that's the thing. Maybe it's just an exception for that site, because you said the other week that the crawl rate isn't a ranking signal, but it just seemed to be a coincidence that it happened after that, that was all. But anyway, no worries, I'll shut up now. All right, can I ask a question before we run out of time? Of course. Go for it. OK, thank you. Well, I have some pages, not many, but they need to be updated regularly, because they show information about tools that get updated over time, and there are new features being compared on the same page. The difficulty I'm having is to somehow show in the results that the date that appears there is the date of the last update, not the creation date. So I've used schema.org Article markup with dateModified and datePublished, but it always keeps showing the datePublished date in the results. What is the correct way to hint to Google to show the last updated version? And I can guarantee I'm not spamming; I only update the articles when there is really something new, not just to push changes. You can't. So there's no direct way to say, this is the date that I want you to show in the search results. That's something we've been talking about on and off with the team that works on dates here: maybe we could just use structured data markup, tell people which markup to use, and they'll do it right. For the most part, they prefer to handle it algorithmically. Sometimes that means that for some pages they pick up the last modification date, and sometimes they pick up the original creation date; they try to figure out where it makes sense to show which date. So this is something you can't really control on your side. But if you have examples, then feel free to send them to me, and I can bring them up with the dates team, and we can look at them and see, could we be doing this differently? Or maybe, when we look at it overall, we're still doing it the right way, and you just need to get used to that. I don't know. But there's no absolute markup that you can just put on your pages to say, this is the date. OK. OK, I can send you an example. I'm just wondering, if it's not possible to somehow hint to Google to show the last update,
would it be better to remove all the dates from the page in that case? I don't think it would change anything for ranking. Not for ranking, just for the people who look at the search results and think the page is old, when in reality it's updated frequently. It's like Wikipedia: their pages get updated frequently, and Wikipedia pages don't show dates in the search results. So would it be better to remove those? I would totally leave that up to you. That sounds like something between you and your users: do I want this date to be shown in search or not? It seems like something you could even do an A/B test on: take a visible section of your site, do it one way there and the other way elsewhere, and see, do users react differently? Do they even notice? From a search point of view, we recognize when a page was originally created even without you having the date on there, because we've been paying attention as we recrawl your website anyway. So it's not that we need that date on there. But for users, maybe it makes sense; or maybe, like you said, it doesn't. Yeah, it's not my users, it's the Google users who will look at the results and think, oh, this page is old, when in reality it's updated frequently. But OK, I'll send you an example, and you can figure out whether there is any hope of making it look right. Can I ask another question, about indexing videos that are on my pages but also published on YouTube, because they are Hangouts? I have these monthly Hangouts, and I have pages on a blog that show the show notes, transcriptions, all the details, but the YouTube page doesn't have those details. So is there any correct way to make those pages appear in video search on Google, instead of the YouTube videos? I don't know. Offhand, one thing that might make sense, or I don't know if it might make sense, is to try to see if making the YouTube video unlisted would work in a case like that. But that may not do what you want. OK. I mean, in general, these are two pages, and they both have the video on them, and we try to balance which one of these is the right one to show in the search results. The YouTube one has no description. The page on the site has all the details, all the topics, all the show notes, all the transcriptions, and Google still doesn't like my site. I don't know. It's like you're fighting with yourself. Exactly. So the question is, shall I stop using YouTube to publish these videos? No, I mean, at least that ranks. So that's something where you might want to look into it and say, well, why don't you just put the stuff into the YouTube description? Or make it more obvious where the actual content page is, and put a link to your content in the YouTube description, and make sure that people can find that connection. Some people want to go to YouTube to watch videos; they don't want to go to a random web page. So I wouldn't say we're always doing it wrong when we point them to the YouTube page instead of to your page. It's the same content, but people like to consume it in different ways. OK, I can send you some links, so you can figure out if I'm doing something wrong. Sure. Sure. John, won't that always show YouTube? This is the issue we had before, when we moved to Wistia. Whether or not the content's pulled through to his site, it's on YouTube. It's not on his site.
So it will always rank YouTube. Well, I mean, for video, what we'll recognize is that the video is embedded on the page, and we can show that page as a landing page for the video as well. So essentially, you have the same content and two different landing pages. And YouTube is something that a lot of people know, and that we know has a really good user experience, so sometimes we do show that. OK. I mean, hosting it yourself might be another option, if you're saying, well, I really, really, really don't want people to go to YouTube at all. All right, more questions. Whoops, kind of out of time. But if any of you have any last-minute questions, feel free to jump in. I'll ask a related video question, then. I don't know if you remember, but a couple of months ago we had an issue where the videos just weren't being indexed. Then we moved to Wistia, and they still weren't being indexed for six months. But then, overnight, 70 out of 70 were being indexed. And bizarrely, when the US site started coming back in the middle of October, all of our UK videos, which are not on the same site and not related, started being de-indexed, and we're now back down to six out of 70. We've gone right back to square one. So, of our own 70 videos, over the last six weeks or so it's no longer indexing those videos at all, weirdly. I don't know; I'd have to look at those to see what's happening. I know our algorithms try to figure out whether a video landing page is really a video landing page, or whether it's just a random page that happens to have a video embedded somewhere. That's what we would look for to understand whether it's a video landing page or not. So it's all about the video, you mean? Yeah. If you have the video off to the side, or in a small thumbnail, essentially, then our systems might look at that and say, well, there's a YouTube video embedded here, or a video in general, but it's really not a primary element on the page. Wouldn't that be an all-or-nothing situation, then? I mean, they're all products, all with the same kind of layout. So will it either be all, or end up being nothing, if it catches them? There's something odd hidden there. I'd have to dig up that old discussion that we had with the video team; I think it was specifically to do with your site. Sorry? I'll give you something to do later, if you get bored. Well, yeah, I'm always looking for more emails. I'd just keep an eye on it, then. Yeah. Quick question regarding those dates. We tested with a few websites, and we noticed that if we do change the published date for an article, it does get picked up in the snippets. So wouldn't that be an easy workaround for the algorithms deciding whether the page was updated sufficiently or not? What did you change? The published date. So instead of just letting WordPress change the modified date, we also changed the published date to the new date, and it did get picked up in the snippets. I mean, that's something you could try out as well. But from my point of view, the snippet is mostly like marketing for your pages, and if you want to have that date in there, or a different one, that's kind of up to you. Our algorithms try to do this work for you, so you don't have to think about it, but sometimes that's not the way you wanted it done. So doing a workaround, like you said, might be an option. Finding a different way to place that date on the page might be an option.
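For reference, the schema.org Article markup with both date fields that the earlier questioner described looks roughly like the sketch below; the values are hypothetical placeholders and, as John notes above, Google treats these dates as hints rather than directives:

```python
# Minimal sketch: schema.org Article markup carrying both datePublished
# and dateModified, printed as a JSON-LD script tag. Values hypothetical;
# per the discussion above, Google may still choose which date to show.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Tool comparison, updated regularly",
    "datePublished": "2015-06-01",
    "dateModified": "2016-11-15",
}

print('<script type="application/ld+json">')
print(json.dumps(article, indent=2))
print("</script>")
```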
I just thought of something: does the sitemap lastmod tag influence that in any way, or is it an extra signal for you? I don't think so; I don't know for sure. For the most part, we use the last modification date to recognize when a page has changed, so that we can recrawl it. And a change on an HTML page can be anything from fixing a typo to a new layout; it doesn't necessarily mean the article is new. It just means the HTML changed. John, what is the best way to send you these sample pages these days? It seems your Google Plus profile no longer has a way to send you a message over Google Plus. Is that correct? I think that's just the way Google Plus has been evolving over time. So it's not that I'm deliberately preventing you from contacting me over Google Plus. You can just add me, and I think you can make it private, or take out the "all," and then it's all right. John, can I ask one tiny, well, one really quick question? OK. So when you get 410s, Gone, they're never really gone; nothing's ever gone, ever, really, is it? They keep getting recrawled. We see this from time to time, when Googlebot goes a bit mad. Each time Googlebot gets there and sees the 410, will the time until it comes back be extended? So eventually it just says, well, we checked, it's gone, let's make that a little bit longer next time, longer next time, longer next time, until it just drops away entirely? I wonder if there's anything like that programmatically built in, so it just keeps dropping away, dropping away, dropping away. I don't think so, at least not that I'm aware of. We basically treat it as either something we want to check, or something we think doesn't really need to be checked that often. And aside from links, are there any priorities for those being checked, i.e., is there something like impressions, or whether people were actually looking for that content originally, or it used to get traffic, et cetera? Because I know there's a descending order in the 410s, but presumably some of that is based off past links, et cetera. What else does that descending order come from? I mean, in general, if there's content there, then we'll try to pick it up as quickly as we can, based on lots of factors, to understand how quickly we have to recrawl. But if it's a 404 or a 410, I have no idea what would be involved in the crawl rate of pages that we assume don't exist. If we find new links to those pages, we'll probably try to recrawl them again. But if we think they haven't changed and we think they're still gone, then I have no idea how much we would crawl there. Would there be any point in trying to recreate content on some of those, if Google keeps coming back looking for the 410s? No, I think it's mostly just us trying to make sure that we're not missing anything. So I don't think you'd have any magical advantage from putting something up on an old URL. OK, thank you. Thanks. All right, so with that, let's take a break here. The next Hangout is set up for Thursday in German and Friday in English, so maybe we'll see some of you then. Timezone-wise, it's set up for the other side of the world, so maybe it'll work better for those of you in Australia, New Zealand, Japan, or India. Check it out. All right, thanks again for joining, everyone. Thanks for all of your questions, and I hope to see you in one of the future Hangouts. Bye, everyone. Bye.