All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts, together with webmasters, SEOs, publishers, like the ones here in the Hangout, as well as lots of people who submitted questions on the Google Plus page already. As always, if any of you want to get started with the first question, feel free to jump on in now.

I can do one. It's related to mobile. So you already spoke before that if you hide the text or a box or something, it's fine for mobile, right? But my question is, what about links? Do hidden links on mobile still transmit PageRank?

Sure. If the link is there on mobile, if it's in the HTML, we'll still count it. If the link is not in the HTML at all and only in the desktop template, essentially, then that, of course, wouldn't be seen with mobile-first indexing. Cool, perfect.

Hi, John, I've got a quick question in regards to sites with similar domain names. So for example, we'd be called bluewidgets.com, and then other sites start popping up, like thebluewidgets.com or mybluewidgets.com. It's probably an easy answer, but do they have any impact on our domain if they're very similar in domain name, especially when they're link building? Sometimes we've seen that they link build to their own site, but also include links to our site to kind of get that similar kind of relevance.

No. If they just look similar, we still see them as separate URLs, so there wouldn't be any kind of algorithmic overlap where our algorithms would say, well, these sound similar, maybe they mean the other one. Either it's the correct domain name or it's not the correct domain. Yeah, and there's nothing to worry about with them link building to both themselves and us within the same article? No, no. Yeah, OK, thank you. Cool.

All right, seems there's a bit of background noise somewhere. Let's run through some of the submitted questions. And as always, if there's anything on your mind in between, feel free to jump on in. And I'll try to make some time towards the end as well for any additional things that come up from your side along the way.

All right, the first question here is kind of related to Search Console. I noticed in the crawl error section there's a massive spike of 500 errors, from a couple hundred to a couple of thousand. This anomaly occurred in a day before decreasing back to the typical range within a couple of days. In the new Search Console, in the performance report, it doesn't show that spike. So which of the Search Console versions should I trust?

Essentially, these are different reports. Looking at the performance report in the new version of Search Console is essentially like looking at the Search Analytics section within the old Search Console, which shows you how your site was performing during that time. That means performing in Search. And the crawl error section shows what happened when we tried to crawl your site during that time. So not all crawl errors lead to URLs dropping from search results. Not all crawl errors lead to changes in ranking of the website or of individual pages. A lot of times we'll take a couple of these crawl errors into account and say, well, this looks like a temporary issue. We'll retry it a couple of days later. And if we retry it by then and it looks OK, then essentially nothing happens in the index. We miss maybe a couple of days of getting updated content from your site. But it's not that we throw your old URLs out the first time we see any kind of issue. So in particular with 500 errors, and partially with 404 errors as well, when we've seen an error once and we think, well, this page used to be really important, maybe it's still OK if we retry it again, then we'll keep the old index state. And after a couple of days, if we see that the page is actually crawlable again, we'll update the index status again. If we see that this error state is a steady state that remains forever, then we'll drop that page out of our index and say, well, probably those pages are not working for users either. So we'll remove them from search. But essentially these are two sides of the whole indexing and ranking puzzle. One is kind of the initial step of Googlebot looking at pages, then the indexing report is kind of like what we've kept from those pages, and the performance report is what happens with those indexed pages. And not all crawl errors that happen in the first part of our pipeline are actually reflected in the ranking side of our pipeline.
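One practical corollary, added here as an illustration rather than something John spelled out: for planned downtime, answering with an explicit 503 and a Retry-After header is a common way to tell crawlers the outage is temporary, rather than serving generic 500s. A minimal sketch in Node/TypeScript; the maintenance flag and retry interval are illustrative assumptions:

```typescript
// Sketch: answer with 503 + Retry-After during planned maintenance so
// crawlers treat the outage as temporary instead of as removed content.
import { createServer } from "http";

const MAINTENANCE = process.env.MAINTENANCE === "1"; // illustrative flag

createServer((_req, res) => {
  if (MAINTENANCE) {
    // 503 means "temporarily unavailable"; Retry-After hints when to come back.
    res.writeHead(503, { "Retry-After": "3600", "Content-Type": "text/plain" });
    res.end("Down for maintenance, please retry later.");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end("<h1>Normal content</h1>");
}).listen(8080);
```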
Does using H2/H3 tags to break up content on landing pages influence rankings in any way, as opposed to just having all content under an H2 tag? What are your thoughts?

In general, I think it makes sense to use semantic markup, so different heading levels, to better break up your content and make it a little bit easier to understand. Sometimes this helps search engines to better understand which pieces of text belong together. Sometimes it also helps users to understand this a little bit better. For example, if they're using a screen reader, then it might be a little bit more obvious which parts belong to the same section. So from that point of view, I'd recommend continuing to use those headings. I don't expect to see a big change in rankings with pages like this, where you have different headings on a page, but it does help us a little bit to understand things a little bit better. So if you haven't been using these headings properly, don't panic. If you have been using them properly, I'd definitely keep them there. It's kind of a really small and soft factor when it comes to understanding pages a little bit better.
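As a quick illustration of that kind of heading structure, a made-up fragment (the topic and text are placeholders, not from the hangout):

```html
<h1>Blue widgets</h1>
<p>Overview of the product line…</p>
<h2>Widgets for home use</h2>
<h3>Sizing guide</h3>
<p>…</p>
<h3>Care and maintenance</h3>
<p>…</p>
<h2>Widgets for industrial use</h2>
<p>…</p>
```

Each level nests one step below the previous one, which is the grouping that both screen readers and search engines can pick up on.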
We have some branch pages which offer different services. So we've created individual landing pages for some of those services and linked them from the branch page. Is this sufficient, or should we be linking back from the landing page to the branch page so that the internal link goes both ways?

I would focus on this more from a user point of view instead of an SEO point of view. If we can crawl those pages and they're linked in a way that gives us context, then that seems like a good thing. Depending on how you see your users interacting with those pages, that's where I would step in and think about: where does the user want to go from this page? Do they end up converting directly? Do they just want our phone number so that they can contact us? All of these things are essentially micro conversions that help you to better understand where users are going and what they're completing, and to help guide how you create the structure of your website in a way that guides them to where you think they want to be headed.

We've been adding new landing pages. How long should we expect to wait before Google will begin to rank them?

Once we've indexed a page, we can rank it immediately. There is nothing artificial there that would be holding pages back. Essentially, once we've indexed them, we can rank them. Obviously, over time, as we collect more signals about these pages to understand where they belong within the context of your website and of the rest of the web, the better we'll be able to show them in the search results to the right group of users. But essentially, once a page is indexed, we can show it in the search results.

All right, a bunch of questions all grouped together. Generally speaking, it's a lot easier to separate these questions out into individual ones. That way, it's easier to go through and catch more of these questions rather than to have one giant question that's hard to parse.

A question about the Google Sandbox. I've been seeing folks suggest there's still a Google Sandbox. I think we've talked about this a bunch of times before. I don't think there's anything really more to add to what we've been saying about this for a really long time now. Nothing really has changed there.

Let's see, the question goes on. RankBrain. What is RankBrain? RankBrain is primarily a way for us to understand the query that is sent by the user. In particular, there are lots of queries that we see for the first time every day. I think it's something like 15% of all queries that we see for the first time. And it's important for us to understand what the user is trying to look for. And this is one of the ways that we use machine learning to try to figure out what this user is actually looking for, what kind of content might be relevant for them, and to present that in the search results.

And then a question about speed. Is Google looking at timings or specific improvements, or both? We use a variety of factors when it comes to speed, including calculated metrics as well as live metrics that we've seen from users. So I would not focus purely on one single number and try to optimize for that. Instead, I'd use the tools to understand where your pages are slow and focus on those issues.

A question about URL migrations. What would be the impact if an A/B test is running while a large-scale URL migration launches? All old URLs properly 301 redirect to new URLs, but Googlebot is sometimes redirected to test URLs while the migration goes live, even if those test URLs are properly canonicalized to the core ones. Could this impact how signals are passed and potentially lengthen the time the site sees volatility?

I don't know if this would cause big issues in the bigger picture, because usually when you're doing A/B testing, that's a small group of URLs within a website. It's not that all URLs on your site are doing this A/B testing thing, where sometimes we see one URL and sometimes we get redirected to another one. For small sets of URLs, I don't see any problem there. For bigger sets of A/B tests, I would try to minimize this as much as possible. The reason is that, especially for site migrations, when you're moving to a different domain, when you're changing within the same domain, when you're moving to HTTPS, all of these things, we try to understand what the general picture is of the website. Is the whole website moving from one domain to another? Is the whole website moving to HTTPS, for example?
And the clearer a picture we can get of what is actually happening here, the easier it is for us to apply an algorithm that moves all of the signals we have from this website to the new version of that website. So if, during a site migration, you're doing some fancy A/B testing, and sometimes we see redirects in one direction, and sometimes we see the site migration redirects, then that can certainly throw us off in the bigger picture and can delay the overall move from one version to another. So especially when it comes to site migrations, the cleaner you can set that up, the better: make sure that really all URLs are redirecting to the new version, that you don't have crawl errors from URLs which should be available, that you don't have things blocked by robots.txt which weren't blocked by robots.txt before. All of these things: the cleaner you can make it, the more likely we'll be able to say, oh, this is obviously a move from here to here. Therefore, we don't even need to worry about all of the details. We can just shuffle everything over. Whereas if we look at it and we see, well, maybe it's a move, maybe they're doing something fancy, then maybe we should take a bit more time to figure out exactly what is happening here. And that can end up causing a site migration to take significantly longer than it really needs to take. So that's, I think, an important thing to keep in mind. It's also something I noticed when I recently asked on Twitter about people's experiences with site migrations. A lot of times, the responses I got were, well, usually it works. But this one time when we forgot to do this, or when we forgot to do that, it went terribly wrong. That's where we see the problems as well: if you're not giving us a clean signal that you're doing this really straightforward move from one version to another, then everything just takes a lot longer for us to process.

John, in terms of site migrations, we're looking at moving to a new CMS, a custom-built CMS, so away from WordPress. And at the same time, we're looking at redesigning the site as well. So what would you say are the main things to look out for? We're hoping to keep the same URL structure, nothing changing there. But is there going to be a period of volatility we should look out for? And what can we do to minimize that?

Maybe, maybe. So what I would try to do in all of these migration situations is get a comprehensive view of the current situation first, and then compare that afterwards as well. That could be by using some kind of a crawler tool, if you have access to something like that, where you can really see all of the URLs from your website, what their status is, what their canonicals are, what kind of internal links are on these pages, and how they're interconnected, so that you can compare that to the after state as well, to make sure that all of those pages are exactly the same, or as close to the same as you want them to be. Even when you're doing something simple, like switching to a slightly different template or switching to a different CMS, you might have things like internal linking which are slightly different, or maybe individual categories which are suddenly not linked at all anymore. So the URL structures themselves stay the same. The URL stays the same. But if the internal linking is suddenly significantly different, then we have to reassess the context of the URLs within your website too. So for all of these things, the clearer a picture you can get of before and after, making sure that there is nothing missing, the more likely we'll be able to just shuffle everything over. OK, thank you. Cheers.
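To sketch that before-and-after comparison in code (an illustration added here; the snapshot shape and helper are hypothetical, not a specific tool John named):

```typescript
// Sketch: diff a pre-migration crawl snapshot against a post-migration one,
// flagging pages whose status, canonical, or internal linking changed.
interface PageSnapshot {
  url: string;
  status: number;
  canonical: string;
  internalLinks: string[];
}

function diffCrawls(before: PageSnapshot[], after: PageSnapshot[]): string[] {
  const afterByUrl = new Map<string, PageSnapshot>();
  for (const page of after) afterByUrl.set(page.url, page);

  const issues: string[] = [];
  for (const page of before) {
    const now = afterByUrl.get(page.url);
    if (!now) {
      issues.push(`${page.url}: missing after migration`);
    } else if (now.status !== page.status) {
      issues.push(`${page.url}: status ${page.status} -> ${now.status}`);
    } else if (now.canonical !== page.canonical) {
      issues.push(`${page.url}: canonical changed to ${now.canonical}`);
    } else if (now.internalLinks.length < page.internalLinks.length) {
      issues.push(`${page.url}: fewer internal links than before`);
    }
  }
  return issues;
}
```

Feeding this the exports of two crawls, one from before the CMS switch and one after, would surface exactly the kind of silent internal-linking changes described above.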
I observed that after adding more than 1,000 new pages to my site, the value of the existing content has diluted. How many new pages does it take to produce that effect? And how can I speed up a noindex for those 1,000 new pages?

In general, adding new pages to a website should never really be a problem, because you're just adding more different types of content. However, if you're adding all of these pages and essentially targeting the same keywords, then that obviously means you're competing with yourself. And it makes it a lot harder for us to understand the context of your pages within the overall web. So that's something to keep in mind. There's no limit to the number of pages you can add in a certain period of time. Some news sites add hundreds of pages every day. Sometimes a site switches on their archive, which adds hundreds of thousands of pages all at once. And all of these things can be perfectly fine, and they can work really well. But it really depends on what you're trying to do. Just adding 1,000 pages and then afterwards saying, oh, I want to put a noindex on all of these pages, I changed my mind, sounds, I don't know, a little bit iffy. I'd be worried that arbitrarily adding 1,000 pages and then noindexing them might point at these 1,000 pages not being such critical content for your website. And that's something you probably want to avoid anyway.

With regards to processing a noindex for 1,000 pages, doing that with a sitemap file makes sense, provided you can give us a clean last modification date on those URLs to tell us that these pages have changed since the last time we processed them. And of course, if you have a last modification date in the sitemap file, make sure that it's reasonable. Don't specify "now" as the last modification date for all pages within your website, because then our systems will look at that and say, well, the whole website couldn't have changed just now, and it says that every time we see the sitemap file, so probably we can ignore that last modification date. So going back to what we talked about with migrations: the cleaner you can give us a signal that these pages have changed, the more likely we'll be able to take that into account.

Will Google correctly interpret a meta robots tag that has been implemented through JavaScript? In this instance, I'm looking to add a noindex tag to the bulk of our out-of-stock product pages.

Yes, we will be able to pick up a robots noindex tag that's added through JavaScript. It's a bit tricky working with the noindex tag and JavaScript, though. Adding a noindex tag to a page with JavaScript is something that will work. However, the first time we crawl those pages, obviously, we don't have the JavaScript processed yet. So we won't have the noindex tag there. So it might be that we'll index that page for a while. And then when we process the JavaScript and start seeing the noindex tag there, we can drop that page from our index. But there is usually this time frame of a couple of days, maybe a week or even longer, between us initially seeing the HTML of a page and having processed the JavaScript version of it. Once we have the JavaScript version, then we're good, and we can drop that page. The other way around doesn't work. So if you have a noindex in the static HTML and you use JavaScript to remove that noindex, then we'll initially see the noindex in the HTML and think, oh, there's nothing to index here, we don't even need to process the JavaScript. So that's one thing to watch out for: if you're using JavaScript to switch the noindex status, you should only add the noindex with JavaScript. You shouldn't remove it, because we'll probably ignore the JavaScript if you try to remove it.
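A minimal sketch of the direction that works, adding a noindex client-side (the out-of-stock flag is a hypothetical stand-in for however the page knows its own state):

```typescript
// Sketch: inject <meta name="robots" content="noindex"> via JavaScript.
// Only the *adding* direction is reliable: a noindex already present in the
// static HTML will be seen first, and the JavaScript that removes it will
// likely never be processed.
function addNoindexIfOutOfStock(isOutOfStock: boolean): void {
  if (!isOutOfStock) return;
  const meta = document.createElement("meta");
  meta.name = "robots";
  meta.content = "noindex";
  document.head.appendChild(meta);
}

// Hypothetical usage: the flag would come from the page's own product data.
addNoindexIfOutOfStock(true);
```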
Do UTM tracking parameters in URLs affect SEO link juice? In general, when we see these UTM tracking parameters on URLs, from Analytics specifically, we essentially drop them. So we ignore the UTM tracking parameters there, and we focus on the primary URLs. When we see direct links to URLs with UTM tracking parameters, then we'll treat them as normal links, and we'll try to crawl those URLs with those parameters attached. You can control this with the Parameter Handling tool in Search Console, or by using a rel canonical on a page, for example; that also helps us. In general, we notice these kinds of things fairly quickly, and we just fold the URLs together and treat them as one. So usually, there is nothing lost if you see that other people are linking to the UTM version of a URL on your website, and I wouldn't worry about that too much. If you see this extensively, I'd just put it in the Parameter Handling tool and say Google can ignore this specific parameter and should just crawl the main URLs instead.

Regarding disavow files, what impact would there be from not uploading a disavow file during a domain name change? So essentially, you move to a new domain, and you don't upload the old disavow file. Practically speaking, what would happen is that as we re-crawl and reprocess the external pages that are linking to your site, the ones you have in your disavow file, we would add those in our systems as links again, to the signals that we have for your website. So what would happen is we would gradually stop applying your disavow as we reprocess those external pages that you wanted to have disavowed. And if you add the disavow file again, then gradually, those links will be dropped from our link graph internally as well. So practically speaking, as always, you should aim for a clean site migration and make sure to include all of these settings and files as well. That also goes for HTTPS migrations, so make sure you upload the disavow file there. And one thing that I haven't seen people mention that much recently: with mobile-first indexing, if you have an m-dot domain, then since we want to have the mobile version equivalent to the desktop version, you should also upload the disavow file for the m-dot domain if you have separate mobile URLs. If you have a responsive design, then obviously, there's nothing you need to do there. But if you have separate mobile URLs, then the disavow file also needs to be there for the case when we switch to the mobile version for mobile-first indexing.

What would you do when a swarm of duplicate sites keeps linking to you, and the moment you disavow one site, a new site appears?

Generally speaking, this sounds like the old-school comment spam, spammy site situation, where there are tons of different domains that are just linking to your website, and they're just randomly linking to lots of websites. And generally speaking, our systems are pretty good at dealing with that and ignoring it. So unless there's something very specific that you're worried about with regards to those links, I would just ignore that. There are lots of really weird places on the web that link to all kinds of websites, and that's not something that I would really worry about. If this is something that maybe a previous SEO set up, and these are a bunch of different sites that you feel might be hurting your website, then I would just put them in a disavow file, using a domain directive. It's just one extra line per site in the disavow file, and it's really fast to do. And maybe go through these every now and then to clean that up. But in general, for most websites, you don't need to do anything in the disavow file for situations like this.
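For reference, a disavow file using the domain directive John mentions looks like this; a small sketch with made-up domains:

```
# Disavow file sketch: one directive per line; these domains are illustrative.
domain:spammy-widgets-clone1.example
domain:spammy-widgets-clone2.example

# Individual URLs can also be listed instead of whole domains:
https://old-seo-network.example/links/page37.html
```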
Let's see, it looks like there are two questions in the chat as well. A Google de-indexing issue: the main problem is we moved to Angular 5.2.10, and after that, our website was de-indexed for many queries.

I'd probably need to take a look at the specifics there. In general, I'd double-check the way that the pages are rendered, which you can do with a number of tools in the meantime: the mobile-friendly test, the rich results test, or the fetch and render tool in Search Console, to look at the way that we can render pages. And if we're able to pick up the content there, then that should work. You can also double-check the indexing report in the new Search Console to see why individual pages might be dropped from the index. Usually, that helps you to figure out a little bit more which direction you might need to head to find more information.

John, can I ask something related to that? We have an issue with a website where every time we use the mobile-friendly test tool, the CSS is not loaded. It just says "other error" without specifying exactly why it wasn't loaded. But it does show up in the Fetch as Google smartphone render image. So there it shows exactly like it should. But in the mobile-friendly test tool, the CSS is never loaded. I'm not sure if it's too slow to be loaded, so it's a performance issue, or if it's just a problem with the tool and we should ignore it.

It's hard to say. My initial guess is that it doesn't matter, because if we can pick up the content, that's fine for us. For indexing, we're also a little bit more flexible with regards to how quickly we need the content, whereas with these interactive tools, we set the timeouts very aggressively to make sure that we can give you a result as quickly as possible. So probably what you're seeing there is that we see a number of requests from this page, including the CSS files, and for whatever reason, we don't get the CSS files, and we just say, oh, here is what we got so far. For indexing, we're more patient. So if we've been able to cache the CSS before, or if we need to run through a second time to actually get the CSS, then we can do that as well. That's the slight difference between the live testing tools and the way that we do indexing: we want to give you a result as quickly as possible in the live testing tool, and we're able to take a bit more time with indexing. But if it's just CSS, then I wouldn't worry about it too much anyway. And if it works with fetch and render, it should be fine. Yeah, that kind of points at a subtle timeout type of thing, where we want to give you a result quickly and one tool is a little bit more patient than the other one. But in general, our indexing systems are even a bit more patient. And in particular, our indexing systems are able to aggressively cache a lot of these resource files, whereas the testing tools always want to get the freshest version to see what the current status is.

But generally speaking, if I want to see something as close as possible to how Googlebot sees it, is the fetch and render tool better than those other separate tools? They should be kind of equivalent. Oh, OK. It always seems to work with fetch and render, so it's probably just a timeout issue, as you mentioned. Yeah. I mean, what you could also do is think about how many requests you have on the page, if it includes a lot of different content from different pages. Maybe that's something you could simplify anyway. But like I said, usually our indexing system is a bit more patient. OK, thanks.
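To make the timeout point concrete, an illustrative sketch (the URL and the deadlines are made up; the real tools' limits aren't published): the same slow stylesheet can fail under an aggressive deadline and load fine under a patient one.

```typescript
// Sketch: fetch the same resource with an aggressive vs. a patient timeout.
async function fetchWithTimeout(url: string, ms: number): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return `loaded (HTTP ${res.status})`;
  } catch {
    return `gave up after ${ms} ms`;
  } finally {
    clearTimeout(timer);
  }
}

const cssUrl = "https://example.com/styles/main.css"; // hypothetical URL

// An interactive testing tool in a hurry might behave like the short deadline;
// an indexing fetch, like the long one.
fetchWithTimeout(cssUrl, 2_000).then((r) => console.log("aggressive:", r));
fetchWithTimeout(cssUrl, 30_000).then((r) => console.log("patient:", r));
```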
What happens with my site when I set a noindex on product pages? When we reprocess those pages, we'll see the noindex and we'll drop those pages from our search results. So that's fairly straightforward.

Our website has been… Sure, sorry. So recently, a huge Chinese e-commerce company announced that they've been using AI to create content for their product pages automatically. I have two questions. The first one: does this impact the quality of the content on the product pages, of the text itself?

I think that depends a lot on how you generate those pages. Traditionally, if you've auto-generated content for pages, you'll see that this content is really low quality and really hard to read, and it's just a mix of different keywords. But I've seen some really, really high quality content that was generated essentially through metadata on a page. So just because it's auto-generated through some system doesn't necessarily mean that it's low quality content that should not be indexed. But you really need to be careful that you don't let it loose and say, oh, I checked 10 pages, so let me generate 10 billion pages based on my database. Probably that's a bad idea. Cool.

Our website has been a leading specialist in a certain sector for the past five years. In recent months, let's see, we've seen a rankings drop, we suspect due to the domain authority of other, bigger sites. We know we need to 10x our content. But equally, some of these sites are offering really poor guidance on an important topic for the user. What steps is Google taking to protect specialist markets from being dominated by large brands when quality content may be jeopardized in the interest of commercial gain? What can we do to fight back?

I think that's always a tricky situation to be in. It's not really specific to online and to SEO. It's always a hard area to be in when you're focusing on a very small niche and suddenly big players come in and try to generally cover that niche as well. That makes it really hard to compete sometimes with some of these really big sites. So I don't have any specific guidance with regards to what you need to do to fight back, other than to really play to your strengths and show how the things that you do are different from what a generic large player can do. So differentiate yourself appropriately. With regards to search, it's certainly something where, from our point of view, we don't blindly assume that big websites are better than small websites. We really need to make sure that we have a variety of content in our search results.
And that can be a variety of some big sites and some small sites. It's not necessarily the case that only big sites are able to be visible in the search results. And often that's very visible, where you'll have maybe one big player trying to generally cover one topic area, but small sites might be laser-focused and really good in that specific area, in a way that the big players aren't. So it's not necessarily a situation where I'd say you need to give up and move to something else. But you really need to stay focused and stay on top of it. So I guess it's a hard situation to be in.

John, a quick one on that as well, in regards to large publisher sites. When they've got multiple domains within a particular niche, they start putting content on those different sites they've got, even if those sites aren't related to the niche. So it ends up being all large domains, and they all start trying to rank, but under one publishing house, do you know what I mean? It's quite difficult again to tackle those bigger brands, but when they all belong to one publisher, it's even more frustrating.

Yeah, I totally get that. And if this is something where you're essentially seeing the same content just across a bunch of different domains, I'd love to see those examples. So you're welcome to send them my way, ideally with queries that are kind of general. So not copying and pasting the title of one of their products and saying, look, there are five different domains from this publisher with the same product listing, but really general queries where we can look at the results and say we're not providing the variety of results that users expect, and we need to take action to make sure that these results remain useful for users, and not just pointing at the same thing over and over again. That said, it can be completely normal for one website to have multiple results on the same results page, or for one company to have multiple sites that are also competing in the same results space. For example, you might have one company that sells a product through two sites, where one is offering it to consumers and the other is offering it to resellers. Then it's a slightly different market, a slightly different offering that they have there. So it's not the case that we would always say no single owner should ever have multiple results, but we really need to weigh the different options that we have available there. And again, if you see this kind of thing getting out of hand, where individual sites are just showing up everywhere in the search results, then that's something we'd love to have examples for. All right. Thanks, John.

Should listings websites, such as apartment listings or hotels, expand the number of localized pages on their site to better serve Google's understanding of their architecture and how the site is relevant for near-me queries? Or is it enough to have one page that can provide the results for any location?

I didn't quite understand how this would be set up. If you have a set of different hotels, then obviously you have those hotels in different locations, so you automatically have those individual location pages. And so it's something where I assume you already have these pages indexed anyway. I don't think it would make sense to create separate hotel pages for locations where the hotel is not actually located. I think that would confuse users more than it would actually help a website, regardless of any SEO aspect there.
So I assume you probably have a different use case in mind, rather than just apartment listings and hotels, which kind of by design have different locations already. So if there's something more specific that you're looking for there, maybe post another question in the next hangout so I can take a look at that.

Can two Google meta tags be combined? For example, a meta name="google" tag with content equal to both notranslate and nositelinkssearchbox. Or should they be separate tags?

Yes, you can combine them. I'm not 100% sure that these are all meta name="google" meta tags. But for meta tags in general, the content values that you have there are comma-separated lists. So you can combine them as appropriate. It makes things a little bit easier, I assume.
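As a sketch of what that looks like in markup (values taken from the question; worth checking Google's meta tag reference for the exact supported tokens):

```html
<!-- Combined: comma-separated values in a single tag... -->
<meta name="google" content="notranslate, nositelinkssearchbox">

<!-- ...which is equivalent to two separate tags: -->
<meta name="google" content="notranslate">
<meta name="google" content="nositelinkssearchbox">
```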
Does Google prefer an RSS feed over a Google News sitemap when indexing news? Both of these work. I don't know how Google News sitemaps are currently being processed with the changes on the Google News side, but in general, both of these work. With RSS, you can set up WebSub, which is a way of getting the changes to your feed to Google as quickly as possible. So that might be an option. And for both of these, usually you just have incremental pages in those feeds. So you don't have a comprehensive list of all the URLs on your website; you just have the ones that recently changed. I believe the Google News sitemap has a limit of 1,000 pages, while an RSS feed doesn't really have a limit.

Let's see. One of our pages, which is in a top spot in the Google SERPs, is getting de-indexed on its own. It's the second time within 15 days. After using Fetch as Google, it comes back. Can this affect ranking? Well, obviously, if it's being de-indexed, then it won't rank at all. So that's certainly one thing. If you're seeing this page come back on its own, or come back after using Fetch as Google, I'd double-check to make sure that there's nothing technically wrong with this page, where sometimes we see the page, sometimes we see nothing, or a duplicate of another page. I'd try to figure out why this page keeps dropping out. In general, though, there is no guarantee for indexing. So it might be completely normal that our systems drop some pages after a certain period of time, where we think it probably doesn't make sense to index a page like this. So that's something also to keep in mind.

What series of questions would you review to determine: yes, this is duplicate content? For context: informational blog pages that tackle different angles of the main topic, such as washing clothes.

If you're looking at different pages that cover different angles, then it's not duplicate content. It's essentially separate content. Duplicate content, from our point of view, is usually when the whole thing is copied one to one. If you're looking at writing different variations of a topic, that's perfectly fine. That said, this feels like you're creating a lot of different content on the same set of keywords, and essentially creating maybe lower quality content that's not so useful to users, if you're just spinning up articles in different ways around the same primary topic. So instead of thinking about whether this is duplicate content or not, think about how this actually provides value to the web overall. Do you really have something unique to say on some variation of washing clothes, or on 100 variations of washing clothes? Maybe if you're a detergent manufacturer, you do have unique information to share. Or if you're a clothing manufacturer, maybe you do have something unique you can share with regards to different, I don't know, material types or different cuts, different types of clothing. On the other hand, if you're just trying to rank with these pages to have ads on them that people might click on, then it feels like you're probably stretching things a little, and it would make more sense to make the pages that you do have a lot stronger, so that you really have something unique on those pages that people will go to, that people will recommend on their own.

To better prioritize optimization of the website for mobile users, what would you recommend doing first? A, create an AMP version of the site, or B, create a PWA-compatible site?

Wow, I don't know which one of these kids I should pick as my favorite kid. These are essentially two different approaches that you can take with regards to making a website better for mobile users. An AMP version is something that we can show in the search results directly. Usually, that's pretty easy to do. Especially if you have a modern CMS, you could just activate the AMP plugin in WordPress, throw that in, turn that on, tweak the UI a little bit, so that you have good AMP pages. So that might be something that's really easy to set up and get running. With regards to a PWA, usually that's a really big investment. It's not something where you can just activate a plugin that turns your website into a PWA. That's usually a fairly big development investment, where you have to actually redesign your website and recreate it in a way that works in the PWA style. Also, most PWAs are built on top of JavaScript frameworks, which means you have the extra complexity of a JavaScript-based website, with all of the extra difficulties that come with that for crawling, rendering, and indexing. So that's something to keep in mind: while both of these do help improve the mobile experience, the effort that's required to get there is significantly different. That's one thing I would certainly take into account. How much development time do you have available? Is this something where you could invest a bit of time and say, I'll do everything to create a fantastic PWA, and I still have enough resources to keep everything maintained and to double-check that my toolchain and everything works well for this PWA website? Or maybe you'll say, well, it's a lot easier for me to just create an AMP version of my site, and I'll go down that road first. There is also the option of creating a PWA that's built on top of AMP, so you could have both of these if you wanted. Or you might say, well, all of these things are a lot of work for me; I'll focus my time and make a good responsive version of my website instead. That's also a good approach to take. So it's not that there's one single best answer for all websites, or for all groups of people making sites. You really need to look at the different factors that come into play there.

I'd like to hear your thoughts on the possibility that hreflang tags with EU as an area appear to be recognized in some instances.

Yeah, I don't know how much I can say there. Someone on Twitter posted that they were using en-EU, with EU as a country code for hreflang. And from our point of view, in general, we wouldn't process this. Looking into our systems on our side, it seems we did pick up something like those EU entries there. But as far as I can tell, we don't actually use that information for anything. So that's something where I would say this is not officially supported. It's not something that I would recommend using. So personally, I wouldn't spend too much time focusing on that. It's possible that we do something with that, but it's an undefined state. It's not something that other search engines would support. So I would not build on top of this undefined situation. Just because it happens to be doing one specific thing now doesn't mean it'll continue doing that in the future.
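For comparison, a supported hreflang setup uses a language code, optionally paired with a real country code, plus x-default for everything else; an illustrative fragment with made-up URLs:

```html
<!-- Supported: language alone, or language plus an ISO country code. -->
<link rel="alternate" hreflang="en-GB" href="https://example.com/uk/">
<link rel="alternate" hreflang="en-IE" href="https://example.com/ie/">
<link rel="alternate" hreflang="en" href="https://example.com/en/">
<link rel="alternate" hreflang="x-default" href="https://example.com/">
<!-- "en-EU" is not a supported pair, since EU is not an ISO 3166-1 country code. -->
```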
Let's see. An e-commerce site, a pharmacy, sells different strengths of the same drug. Is it better to have them on separate pages or on one page, like one page for the drug at 20 milligrams and another for the same drug at 40 milligrams?

My general advice here is: if you think that people are explicitly looking for these different variations, then maybe it makes sense to have separate pages. Whereas if this is just an attribute of the main product that you're already selling, then I would combine those pages. In general, having fewer pages means you'll have stronger pages, which makes it a lot easier for us to rank those in the search results. So as much as possible, if you're stuck between having this page twice and having just one page instead, I would lean towards making one stronger page rather than having multiple landing pages for different variations. Whereas if you look at this and you see, well, users are explicitly looking for this specific size, because there's this one really unique use case where this size or variation of the drug is really important, then maybe it makes sense to split that out into a separate page. But in most cases, this is probably more like there being different colors of my t-shirt available. When people choose my t-shirt, they like to choose one of these attributes, one of these colors, as well. The color is not the primary thing; it's more like, well, they've already picked my brand, they've picked my design, and now the choice of color is a secondary thing for them. In that case, I would move that more into a dropdown on an existing page.

Let's see. Do the ranking factors vary based on queries, or based on web pages, or both? We use lots of different ranking factors, and they do change over time. They change based on queries, users, personalization, a lot of different things. It's not the case that you could create one set of ranking factors that is valid for one specific URL, or that is valid for one query, and assume that these ranking factors will not change. These things change all the time. So it's not really something you can tie down and say, well, this is the top list of ranking factors, or for this type of page, this is the top list of ranking factors, because these things do change quite a bit. And they can change significantly and really shuffle things around in the search results.

Hi, John. Last question from me. Suppose we've got some legacy content, old content that maybe we've neglected because we focused our efforts elsewhere, and a user clicks on it, has a poor experience, and clicks back to the search results. That's obviously bad for SEO. Is that something that would affect that page only, or would it have an effect on the rest of the website?

We try not to use signals like that when it comes to search.
There are lots of reasons why users might go back and forth, or look at different things in the search results, or stay just briefly on a page and move back again. That's something that's really hard to refine and say, well, we can turn this into a ranking factor. So I would not worry about things like that. When we look at our algorithms overall, when we review which algorithm changes we want to launch, we do look into how users react to those changes. But that's something we look at across millions of different queries and millions of different pages, to see, in general, whether an algorithm change is going in the right direction or not. But for individual pages, I don't think it's something worth focusing on at all.

Yeah, we're kind of just at the point where we're looking at our content from three years ago. I'm wondering whether we need to make sure that's updated and that the user has a great experience, or whether we should focus on what's making us money, and things like that.

I think that's something that's always worth reviewing. But I wouldn't do it primarily for SEO reasons. I'd look at it from a user experience point of view, or from a content management point of view. It's like: we have all of this old, crufty content. Should we clean it up? Should we throw it away? Is this something that will distract users if they find it? Look at it more from those aspects rather than from a purely SEO point of view. Yeah. OK, thank you.

All right. We're getting close to the end of time. Any other questions from your side that are left?

Hi, John. I have a question. Hi. In the new Google Search Console, there is a section called Index Coverage, and in that one, there is a section called Excluded. How is Google deciding that those URLs are excluded from the search results?

Usually in there, we list the individual reasons why URLs are excluded, which could be that maybe there's a noindex tag, maybe there are crawl issues, or other problems with those pages. I would look at those individual sections in there. I believe we also have a list of these reasons in the Help Center, so I'd check out the Help Center with regards to that. It's not the case that someone is manually looking at your URLs and saying this one yes and this one no. There are really reasons behind it from our side, where we say: some of these we want to index, some of these we don't want to index, and some of these we can't index. And we try to give you as much of that information as possible.

Actually, all the URLs are like review pages, and those are excluded from search. That's why I'm asking this. And what specific reason is given there for "excluded"? It just mentions that they're crawled, but not indexed.

Crawled, but not indexed. That can be a normal situation, too. We don't index all pages. Maybe we've seen these, but we feel it's not worth the effort to actually index all of these pages. So that can be completely normal. We try to give you that information so that you're aware of it, but for the most part, that can be normal. It can also point at these pages, and maybe other similar pages on your website, being kind of borderline from a quality point of view, where we think maybe it's not worth investing too much time into the rest of this website as well. So significantly improving the quality of your site might be an idea. OK. Thank you. Cool. All right.
So with that, it looks like we still have a bunch of questions left. One thing that I thought I'd pull out from the questions: let's see, Sebastian mentioned one of his sites that lost a lot of traffic in the French market. I need to double-check what exactly is happening there with one of the teams here. One of the things I've noticed is that you have a rel canonical set from your home page to the French version, and I wonder if that might be throwing some things off a little bit there. So I'd double-check the setup with the canonical and the hreflang links that you have there. But I'll also double-check some things on our side, where we might be picking up something a little bit incorrectly. One of the challenges when looking at that page is that it's really hard to tell how much of this website is actually live versus how much of it is essentially not there, where we might say it looks a bit like a parking page, or a bit like an ad landing page, rather than an actual website, because probably a lot of your content is behind a login. So that makes it really hard for us to find more information there. But I'll double-check that with the team here to see if there's something more specific that we can point you at. And if you post your question for the next hangout, maybe I'll have something more to say there.

All right, so with that, I think it's time. I'd like to thank you all for coming in and asking all of your questions. I hope this was useful for you, and hopefully I'll see you all again in one of the future hangouts. Thank you. Bye, everyone. Thank you for answering. Thanks, John. Bye. Thanks, Pauline.