OK, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. Part of what we do are these Office Hours Hangouts for webmasters, publishers, anyone who's interested in search and has a website. A whole bunch of questions were submitted today, but maybe I'll just start off with something from your side, if anyone wants to go ahead and ask a first question.

I'll ask a question, if no one else will.

Go for it.

Are there any plans to bring back the detail in Webmaster Tools for where your 404s originate? You can't see what the linking pages are at the moment. Whereas you used to be able to drill down and say, well, actually, it's coming from here, linked from this page, that's no longer an option. So the report itself becomes almost pointless, because you can't fix what the actual error is.

Good point. I know there was some discussion around that internally, but I don't know what the current status is there. I'll poke at it again.

OK, thank you.

I also have a question. How would it affect deduplication between multiple languages on a website if some of the pages that I have hreflang for are noindexed or redirected? How would that affect deduplication?

OK, so with hreflang, it's always a connection between multiple pages. And if any of those pages don't work, we just ignore those specific pages. So if there are individual language pages that are noindexed, or if they're redirecting somewhere else, or if they don't have a return tag or anything like that, then we would just ignore that one page. It's not that we would say, well, something is broken, so we will ignore everything, or we'll ignore it for the whole website. It's really just that individual page that we would ignore.

OK, great. Thanks.

OK, there are tons of questions submitted.
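Before moving on: the per-page hreflang behavior described in that answer (a noindexed, redirecting, or unconfirmed page drops out individually, while the rest of the cluster keeps working) can be sketched roughly like this. This is an illustrative model only, not Google's actual implementation; the function and field names are made up:

```python
# Illustrative model of the hreflang behavior described above: pages that
# are noindexed or redirect drop out individually, and an annotation only
# counts when both sides confirm it with return tags. This is a sketch of
# the concept, not Google's implementation; all names here are made up.

def confirmed_hreflang_pairs(pages):
    """Return the hreflang pairs that remain usable in a cluster.

    `pages` maps a URL to a dict with:
      - "noindex":    True if the page carries a noindex directive
      - "redirect":   True if the URL redirects elsewhere
      - "alternates": set of URLs the page's hreflang tags point to
    """
    # A page that is noindexed or redirecting is ignored individually.
    usable = {
        url for url, info in pages.items()
        if not info["noindex"] and not info["redirect"]
    }
    pairs = set()
    for a in usable:
        for b in pages[a]["alternates"]:
            # The annotation only holds when the return tag is present.
            if b in usable and a in pages[b]["alternates"]:
                pairs.add(frozenset((a, b)))
    return pairs
```

Marking one page as noindexed removes only the pairs that involve it; the remaining annotations stay intact, mirroring the "we ignore just that one page" behavior.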
So I thought maybe I would just run through them fairly quickly so that we can try to get as many of them covered as possible. Some of the questions are kind of long, so I'll try to simplify them.

Let's see, the first one I have here is: we're finding that Extended Stay America is the default 3-pack for any queries like "extended stay" and the city name. It feels unfair to other websites that are offering similar things.

So I think two things here. On the one hand, I don't know too much about the 3-pack, about all of the Google My Business rankings and how all of that works. So that's one thing there. The other thing is I'm not completely aware of this query and how specific that might be to that particular website. It sounds a little bit like people are trying to find that website, so we are trying to show that website. But I'll pass this on to the team here to take a look as well, just to make sure that they're happy with the kind of results that we're showing there, or if there's something that we need to improve on from an algorithmic point of view.

We generally add informative text content below our product category pages related to the specific product and category. Will it be OK if we add article schema along with product and breadcrumb schema on product pages?

So you can definitely do this. I don't know what advantage you would have by including this kind of markup on a product page, because when it comes to rich results, we generally pick one type of rich result and we show it like that. So probably you're not going to see a big advantage from adding article markup to a product page. The other thing I'd caution against is that just arbitrarily adding text to a page doesn't make it better. One thing we sometimes see is that people take a product category page on an e-commerce website and add five Wikipedia articles' worth of text below the category page, just to make sure that all of the possible keywords are covered.
And that doesn't help us at all. On the contrary, our keyword stuffing algorithms might pick that up and say, well, there's just so much random stuff here on this page, I don't know what we should rank it for at all. So I would really focus on informative text that people actually read, and see if you can provide value like that, rather than just trying to dump as many keywords into a page as possible.

A few months ago, I asked how to recover my site from an algorithm change. As per your suggestion, I removed 70% of the non-performing, low-quality pages from the index, and the process is still ongoing. And the question goes on with: I would like to add even more content to my website.

So I think that's something where you have to figure out what the right balance is for you. I'm still not completely convinced that these kinds of pages, like the one that you link to, are that valuable. But that's ultimately up to you. And the same thing applies to anything new that you're adding: find a balance that works for you, pick the kinds of pages where you would like to add more value to the web, and focus on doing that.

What's the best way to contact you about an issue, or to ask you for more information?

Usually, these Office Hours Hangouts are a good way, or contacting me on Twitter is an option. If you have more information in a help forum thread, that's also an option. I see you link to a forum thread there, so I'll go take a look at that afterwards.

My website is still not showing up in Google Discover in Sweden, but outside of Sweden it does. Do you know if Google is able to block specific sites from showing up in just one country?

I am not aware of anything manual like that with regards to Discover. Usually, that's a pretty algorithmic setup, but I'll double-check with the team here. One thing that I did notice is that you have a bunch of websites that you mentioned there.
So I wonder if you're really focusing your effort on one strong website, or if these are websites that might be worth combining into one stronger website. I don't know if they're all your websites or if they're other people's websites, though. But I'll pass that on to the team that works on the Discover feed to see what they might find there. In general, Discover is an algorithmic, organic feature, so it's not the case that every website in every country would show up in Discover. It's also based on what users are interested in. So just because you have content doesn't necessarily mean that we would show it there. But I'm happy to pass it on to the team. With regards to the "Articles for you" feature in Chrome, I don't know if that would be considered the same thing. I imagine there are some aspects that we share across these different kinds of bubbling-up information platforms, but they're pretty unique sometimes.

The new Google News Publisher Center requires an RSS feed for including articles. And it goes into a lot of detail on specific tags within the RSS feed.

I am happy to pass that on to the Google News team. I don't work on the Google News side, so I don't really know what's all behind that. But I'll pass that feedback on to the team. I think you're also active in the forum, so escalating it in the forums would also be a good option.

A question about alt text in photos. I have a website for a cancer clinic that has many pages and photos. Let's say on a page about breast cancer, there's a photo of a room with devices. Should I write in the alt text only what's in the photo, or can I use keywords about cancer and treatment and all of that?

So alt text is an alternative text for an image. It's not a catch-all place where you dump all of your site's keywords. So I would really focus on what you're showing in those images, or what's relevant with regards to those images.
So don't just blindly include all of your site's keywords there, because that's not really going to be useful either for users or for our algorithms. We use alt attributes to understand the images a little bit better. And it doesn't really help you that much to include all kinds of extra keywords for an image, in the sense that, in the best possible case, we would show that image to people searching for those keywords. But if that image is not relevant for their interests, they're not going to go to your site. So just appearing in the search results isn't really that useful, specifically when it comes to image traffic.

This is a follow-up to some questions regarding a big drop in rankings with our website.

I've seen you and your website around for a while now, and it's kind of frustrating to see that it's not picking up from your side. You list a whole bunch of things that sound like good ideas that you've been working on. So I'll definitely pass this on to the team to double-check, to see if there's anything more specific that we can point at. I think your idea at the moment is to change domain names again, which is a pretty drastic measure. So that's something where I'll double-check with the team here to see if we can point out something more specific that might be worth looking at, or if it's more a matter of, well, the web has just changed significantly over time, and things that used to be very, very relevant in the search results are maybe not so relevant anymore for specific queries. But I'll double-check with the team here.

Are purchases made by consumers on e-commerce sites a signal for Google rankings?

No, we don't know what people purchase on e-commerce sites. So it's kind of hard to use that as a ranking factor.

In Search Console, it looks like I've had a little traffic from Discover in Sweden in the last weeks, while Google Analytics says I haven't had any traffic from Google Discover. Which of these sources should I trust?
I would focus on Search Console. Specifically for Google Discover and the different kinds of feeds that we have like that, my understanding is there is no specific referrer that we would use for that. So looking in Analytics, you probably wouldn't see much information there. Whereas Search Console is really focused on giving you information about the clicks and impressions where we actually showed your site, which is generally a little bit more information for Discover. When it comes to Discover, I believe the referrer that we send with the user traffic is more like a generic Google referrer, because it comes from one of the many Google surfaces. But in Search Console, you would see it more specifically.

Sometimes we have the same link present multiple times, up to 20 times, on category pages. Can Google penalize us for this? Or should we mark them as nofollow when they're repeated?

No, Google will not penalize you for having the same link multiple times on a category page. Sometimes that just happens for technical reasons. Sometimes that's just the way your website is built. That's not something that we would consider to be malicious or spammy. Should we mark them as nofollow? No, you don't need to mark them as nofollow. Again, that's perfectly fine. We see that link on the page. If we see it multiple times on a page, it doesn't change anything. With regards to seeing things multiple times on a page, the one thing I would watch out for is more keyword stuffing. So if you're mentioning the same keywords not 10 or 20 times, but 200, 300, 400 times on a page, then that's something where we might say, well, this is a lot of duplication, and we don't know if perhaps that's something that we need to watch out for. But just having something like a link 20 times on a page is absolutely no problem.

Can breadcrumb schema on a home page give any benefits over a home page without breadcrumb schema?
I don't think you'd have much advantage when it comes to the home page from putting breadcrumb structured data on that page, specifically because there is no breadcrumb trail that leads to your home page. It's the start of all breadcrumbs, right? So from that regard, I don't think there's any advantage there. Some sites just use breadcrumb schema on all pages. And if it's easier to put it on all pages, rather than to individually pick and choose, then having it on the home page is perfectly fine. But I don't think you'd see an advantage.

How can we change the content, website name, and the Apply button in Google for Jobs when it's wrong? Adapting the title tag and image tag, and taking job ads in and out with the API, was not successful.

I honestly don't know how that works in Google for Jobs. But if you want to drop in a link to the search result or the site where you're seeing this, then I'm happy to pass that on to the team to have them take a look as well.

We've been dropping in Search Console, in average position in general and for relevant keywords, at a rate of 0.24 and 0.1 each month since 2017.

That's pretty exact.

I looked at the backlinks we have, and 50% of the total backlinks come from a subdomain generated by a tool we used since 2012 and stopped using three years ago. This tool allowed us to publish help guide material for our users. Despite eliminating all of the generated posts from this tool one year ago, all of the URLs remain active. Could this be directly related to the drop? What would you suggest we do? Would disavowing a domain with half of our backlinks affect our rankings?

So I think, starting with the last one: if you disavow a significant part of the natural links to your site, then yes, that can negatively affect your ranking. But that would be negatively affecting your ranking, which is probably not what you're looking for.
In general, if this is content that you published, which is help documentation, extra material about your website, about the products or services that you're offering, then that seems like perfectly fine content to keep indexed, and not something where I'd say you need to block it from being shown in Search, or you'd need to block those links from being picked up on. So from that point of view, I don't think any of the changes that you're seeing are related to this content and those particular links. It's probably a more general thing. In particular, when you're seeing a gradual drop over a longer period of time, that, to me, points at natural ranking changes, where things in the ecosystem have changed, things in our algorithms are slightly changing, maybe users are searching in different ways or expecting different kinds of content in the search results. If you're seeing these gradual, step-by-step changes over a longer period of time, that generally wouldn't be a sign that there is one thing that you're doing wrong which made everything blow up. So that, to me, would be something where I'd try to take a step back, look at the website overall, and find areas where you can make significant improvements, to turn the tide around a little bit and make sure that your site becomes more relevant, or significantly more relevant, for the kinds of users that you're trying to target.

What's the difference in indexed page counts between a site: query on Google.com and the Search Console coverage report? Which one is more accurate?

So in general, I'd say the Search Console index status report is extremely accurate. That's the one that's really based on pretty much all of the indexing that we have. A site: query is a restrictive query, which means we would try to only show results from that particular site. But it's not an inclusive query, in the sense that it would include everything that we have indexed from that website.
So sometimes you will see things like: you do a site: query, and it shows you, I don't know, a couple hundred results, and you look in Search Console and it says you have 100,000 pages indexed. And from our point of view, that can be completely normal. So that's not necessarily something we would consider to be wrong. The other thing is that if you have a larger site and you look at the count that is shown for a site: query, that's something that is generated with a focus on speed, not with a focus on accuracy. So the number that we show, of about, I don't know, 100,000 pages, can be significantly different. I've seen cases where it's off by a factor of 10 or 100 or even more. So if you want a number to focus on with regards to the pages that are indexed for your site, then definitely use the Search Console coverage report, and don't use a site: query. And the same thing applies to inurl: queries, if you're doing it the other way around.

On an e-commerce site with around 100,000 products categorized with many different attributes, Googlebot will be dealing with 100 million plus pages generated from all sorts of filters. Google is setting its own canonical instead of ours. This is giving us lots of low-quality URLs indexed. Is it OK to disable filter links that don't have a search intent, only for Googlebot, to eliminate a number of URLs for Google to process?

It's perfectly fine to put noindex on these kinds of pages. So from my point of view, really focusing on the pages that you want to have indexed is what I would do there. In general, when it comes to category pages, I would try to find one sorting order and a general filtering that makes sense overall, and let us crawl and index all of those paginated pages from that setup. And then, past that:
If you want to have individual facets that you think are relevant and useful in Search, then that's fine. But everything else I would find a way to block, and the easiest way to do that is just to use a noindex on those pages.

On a product page with a load-more JavaScript button, generally how many times do you trigger the load more before you stop? How do you detect the load-more button if it's an image button or has an unusual text other than "load more"?

So my understanding is we would not click on this at all. I believe in the past, we would try to figure this out and try to trigger some of these to see what would happen. But that's extremely expensive when it comes to rendering the whole web. So I believe we don't do that at all. What we do instead is something we call frame expansion: we take the page that you serve us, and we try to render it on an extremely long viewport. And we see if the page does anything like infinite loading, where it expands the viewport. We would expand that viewport once, see what loads, and try to index all of that. But we would not do this multiple times in a row, because especially when it comes to infinite scrolling, like the name says, some websites actually have pretty much infinite scrolling, where you could expand this a whole bunch of times and you wouldn't necessarily find new content. You'd just find repetitive sections across the whole website. So if you're using a load-more button with JavaScript, that's content we probably wouldn't pick up, and something I'd try to reconsider. If you're using infinite scrolling, then double-check our documentation to make sure that you're implementing it in a way that works well for Google for indexing.

Let's say ranking factors could be measured in bananas. That's crazy. How many bananas would you give URL structure in comparison to a title tag? And what is your future prediction about this?

So I would give it seven bananas.
I think that's a good number of bananas. Actually, that's a lot of bananas. Now, I don't know. I think you're generally asking: is the URL structure more important than a title tag or not? And these are vastly different things. So that's not something where I'd say you should focus on either one or the other, but rather, if you want to work on your site in a holistic way, try to make sure that everything aligns. And I don't think that's something that will significantly change in the future. We will definitely try to understand which parts are relevant and treat them appropriately. Sometimes we can pick up a little bit from the URL structure, but usually we try to pick up most of our information about a page from the page's content itself, which would include the title tag. Just blindly focusing on the URL structure or the title tag, I think, is way too simplistic.

We operate one of the largest stock photo agencies, with millions of visitors from Google monthly. We have experienced the issue of Google folding together localized versions of our website. I described the issue in a forum thread. We've implemented hreflang and canonical tags for many years without any issue. What's the issue now?

I don't know. It sounds like something where I'd have to take a look at the details. With the forum thread, I'm happy to pass that on to the team here to double-check what might be happening there. One of the common sources of confusion with regards to hreflang is if you have exactly the same content, such as the same-language content but for different countries, then our indexing systems will often say, well, this is a duplicate of the other page that you have, which would be correct, because it's the same-language content. And in cases like that, we would understand that this is a duplicate. We'd fold it together for indexing, and then we'd try to fold it apart again when it comes to showing it in the search results.
So in the search results, we try to do the right thing. However, since in indexing we fold it together, the reporting in Search Console is quite confusing, in that we would only report on the canonical version that we use for indexing. I don't know if that aligns with the thing that you're seeing there with regards to your website, but that is something that is a bit confusing, and I think it would be nice to make it a little bit less confusing on our side, but it's a pretty hard problem. But anyway, I'll double-check your forum thread and see if there's something that we need to pass on to maybe the indexing team.

I have a Canadian domain with Search Console geotargeting set to Canada, but I've noticed that the US traffic has been increasing lately and surpassing my Canadian traffic. My site is an e-commerce site, and this is causing a decline in conversion rate. Is there a way to correct this issue and communicate with Google to rank my site mostly in Canada? Should I add hreflang tags? I don't have a .com.

So I think the tricky part here is we don't have a mechanism for people to say, don't show my site in this country, or don't show my site in other countries. Rather, when we find one website, we may assume that it is something that is globally relevant, even if it's on a country code top-level domain. In particular, it's completely fine to have a global website that's on a country code top-level domain, and we'll try to rank it globally where we think that makes sense. And I think that's probably what's happening here, in that we think your website is pretty good, and we would like to show it to as many people as possible. And we don't realize that you're actually not able to serve people in those specific countries, or don't want to, so you'd prefer not to be shown there. This is something that sometimes comes up, where you don't want to have your site shown to people in other countries.
But so far, we haven't really had any strong reason to provide extra functionality or meta tags where you can say, discourage my site from being shown here, or only show my site here. In particular, geotargeting is something that helps us to promote a site in a specific country, but it doesn't mean that we won't show it in other countries. What I would try to do here is just place a banner on your website, so that when users come to your website, they're aware of what you're targeting or what you're able to provide. But in general, that's not something that would be seen as a negative on our side. Also, with regards to the conversion rate, that's not something that we would use for Google Search. So at least from the Google Search side, that's not something you'd need to worry about. It's probably more a matter of understanding your site's analytics a little bit better, and then finding a way to make sure that you're not misled by seeing numbers that are irrelevant for your website.

I've been having an issue lately where Google is using text from my site's navigation in place of a meta description. It's content that's not useful for users, and this is across multiple pages and queries and sections of the site. I understand you'll sometimes replace the meta description, but this is kind of puzzling.

This kind of thing can happen. In particular, what happens is, when someone searches for something and we would show pages from your website, we try to show them a snippet of your page that matches what they were searching for, to give them a little bit more context about why your page is the best one for them to click on. And if we can't find that information within the description meta tag, then we'll try to pull out some information from the rest of the page.
So if the best matching, or the most relevant, match that we can find is from your site's navigation, then we'll try to pull it in from there. But one thing I would try to do in a case like this is to make sure that the description meta tags you have on the individual pages are actually aligned with the queries that you think would happen there, and that they really describe what is happening on the page, so that it's not a generic set of keywords that you're providing, but rather really a good description that we can show for these kinds of queries. The other thing I would watch out for is, if you're using a site: query to pull out these pages and double-check them, keep in mind that the site: query is an artificial query from our side. The title and the description that you'll see in those search results do not necessarily match what a user would see when they do a query on their own. So instead of doing a site: query, make sure you're using the queries that are shown in Search Console, so that you see a little bit more of what users would actually see.

I'm confused by the mobile usability report in Search Console. I have a main site and a mobile site, which are connected correctly with canonicals. The problem is that the number of mobile pages that Google reports via the Enhancements tab seems to be steadily dropping for no reason. These pages have been OK, and there's no error showing.

So one thing to keep in mind with all of the enhancement reports, and in particular also the mobile usability report, is that we report on a sample of the pages from your site. We don't report on the complete set of indexed pages, but really just a sample. That sample can change, and it can change over time as well.
So if you're seeing that all of your pages are mobile friendly, for example, and the number of pages that we show as being mobile friendly is dropping in that report, that's not a sign that you have fewer mobile friendly pages, but rather that we're just reporting on fewer pages from your website with regards to mobile usability. So instead of the absolute number of pages that we report there, I would focus on the percentage or portion of those pages that have issues, and use that as a guide to help resolve things. Don't focus on the number of pages in the enhancement report; rather, focus on the issues that are flagged there and their relevance with regards to the total reported pages.

After the January Google update, the amount of Googlebot crawling on our site has been cut in half. And during this time, I didn't make any changes to the site.

Let's see. So I think, in general, with regards to crawling, we have multiple things that come together. On the one hand, we have what we call the crawl demand, which is the number of pages that we want to crawl, or that we think are important to crawl. And that's something that can change over time. With algorithm changes, it can change as well: if we think that a site is perhaps less relevant overall, then maybe we'll not crawl as many pages from that site, because we're having trouble showing it in a relevant way in the search results. So we don't want to put extra load on a server in situations where we're not able to actually present those pages well in Search anyway. So that's the one thing. The other side is the load that we can put on a server from a technical point of view. And that's something that we determine automatically, based on the speed of the server and the errors that we see back from the server.
So if we find that a server becomes slower in responding, or a server is sending us more server errors (404 errors are fine, but the actual server errors, where the server is saying, oh, I can't deal with all of these requests), then that's something where we'll decrease the amount of crawling as well. So those are the two aspects there: on the one hand, the demand; on the other hand, the load that we can put on a server. The other thing that also plays a role when it comes to crawling is that our algorithms try to find ways to optimize crawling in general. In particular, if you have a lot of complicated URL parameters, over time we'll try to learn which of these parameters are actually relevant, and we'll try to focus on the relevant and useful parameters instead of all of them. So that's something where you can also see a decline in crawling, which is essentially a good thing, because Google is learning to crawl the more relevant URLs. One way you can determine if that's the case or not is to look at your log files and see what Google has been crawling in the past and what Google is crawling now. If Google is just crawling better or more efficiently now, then the decrease in crawling can be perfectly fine. It might be a good thing. On the other hand, if you're seeing that this decrease is not based on a better sample of URLs being crawled, then I would try to figure out: is it because my server is slow or problematic, which you can see in the general server response time that you have and, looking at your logs, in how many server errors you're returning? Or it might be that our algorithms just think that maybe it doesn't make so much sense to focus as much on this website in general. And that could be a subtle hint that maybe there's something you can do to significantly increase the quality of your website.

The question goes on: Google launched a number of updates this week.
I saw a big fluctuation in third-party tools on the 18th of February. My keywords have dropped over 20%, and Search Console shows that on those dates we lost 90% of our traffic. Our site has been doing all of the right SEO, like long-term, stable link building. Can you tell me what I should do in this case?

So I don't know. This "long-term stable link building", that seems like a slight red flag, perhaps. I don't know exactly what you're doing there. So that's something where I'd be a little bit cautious. I'm not aware of anything specific that launched in the last couple of weeks with regards to Google Search rankings. So to me, this sounds more like our algorithms are trying to figure out a little bit better where we can show your site in a relevant way. And sometimes that can result in a less visible site overall. So maybe I would go to the help forums to get some input from other people as well, just to make sure that you're not missing anything obvious.

We had canonicalized page B to page A. But after a while, we decided there was good enough content on page B that's not present on page A, so we decided to self-canonicalize the other way around. Will Google consider it? How will Google crawl and index it? How much time can it take?

You can change canonicals. That's totally up to you. If you want to go from one page to another, that's perfectly fine. That's something where, like a redirect, it can take a bit of time for us to process that and to switch things over. And the rel canonical is, for us, one of the signals with regards to which page to pick as a canonical, but it's not the only one. So if you're changing which URL you want to have shown as the canonical, then make sure you align all of the other signals with that decision as well.
So in particular, things like internal linking, sitemap files, and if you're referring to this page in other ways, such as in structured data or with hreflang, then make sure all of that aligns with your choice of canonical. But ultimately, that's your choice. We try to pick that up if we can. We can't guarantee that we'll pick the canonical that you choose, in particular because, if you're using the rel canonical, you're telling us these pages are equivalent. And if these pages are truly equivalent, then it doesn't really matter which of them we index. It sounds like those pages are not really equivalent on your side. So maybe it also makes sense to reconsider how you're doing canonicalization there, with regards to whether pages are actually duplicates or not.

The images on our site aren't indexing, and I don't know why. I analyzed the log files and noticed Googlebot Image got a 200 status code. And when I tested the pages with images, I saw other errors where the images couldn't be connected. OK, I solved it.

I don't know specifically what you're testing there, and I'd probably need to take a look at the site more exactly to see what might be causing your problem. In general, when it comes to images, we need to have image tags on the pages so that we can recognize which images belong to which pages, and we need to be able to crawl the images that you specify there. In particular, if you're doing something like progressive enhancement or responsive images, where you're swapping out images depending on the device or the viewport, or lazy loading those images, then you need to make sure that Googlebot is able to understand which images actually belong to that page. Having clean image tags pointing at the actual image URLs is really critical for us. Some kinds of lazy loading cause issues in that we can't recognize which image you're actually pointing at.
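As an illustration of the lazy-loading point: a pattern like the first one below leaves no real image URL in the markup for Googlebot to pick up, while the second keeps a crawlable `src` attribute. The file names here are just placeholders.

```html
<!-- Hard for Googlebot: the real URL only exists in a data attribute
     and is swapped in by JavaScript after the user scrolls -->
<img data-src="/images/product.jpg" src="placeholder.gif" alt="Product photo">

<!-- Crawlable: a real src is present; native lazy loading is a
     browser hint and doesn't hide the image URL -->
<img src="/images/product.jpg" loading="lazy" alt="Product photo">
```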
So that might be something where I'd double check those pages to see: is Googlebot actually able to see image tags on that page, with a proper src attribute? And then double check those images with regards to whether they can be crawled per the robots.txt file. The tricky part here is that the Inspect URL tool, which you can use to test crawling, only applies to web pages, so you can't test images with it. But what you can do is use Inspect URL on the HTML page where you have the images embedded, and then look at the rendered HTML code, which is in a separate tab there, where you can double check whether those image tags are actually being rendered and recognized.

Is photo metadata important or not? It's not going to make or break your site, so it's not the number one ranking factor. But you can put some more information in photos, and sometimes that makes sense for us to understand. I would say for most websites that are just interested in having the normal web search rankings, you definitely don't need photo metadata. With regards to image search, for most websites the photo metadata is probably also not important. We recently launched a functionality where you can specify licensable images on your pages. If you have those kinds of images that you're able to license to other people, that might make sense, and that's essentially a kind of photo metadata.

If I'm changing my platform from Wix to WordPress, will it affect my rankings? And is having different niche content on a subdomain good for ranking or not? So if you're changing your platform, it can definitely affect your rankings, because essentially what you're doing is revamping your website. You're recreating a new version of your website.
And when you do that, that's something that can have a positive or negative effect on your site's rankings, because suddenly your pages are completely different. If your internal site structure is also completely different, then that, again, can have a positive or negative effect on your site's rankings. If you're changing the URLs on your website, then one thing you need to do when changing platforms is to redirect from all of the old URLs to all of the new URLs. Because if you don't do that, then essentially all of the information we have collected over time from your old website will be dropped, because suddenly all of those pages are invalid. So if you're changing the URL structure, if you're changing the URLs on a site, always make sure to set up redirects.

I'm in a news vertical, and I often see stories' timestamps that are not properly represented in the Google Top Stories carousel. We run into many instances where the time being pulled is very old. What could be causing this to happen? We update the lastmod for the story in our news sitemap, and the structured data and the on-page timestamp reflect any new updates.

It would be really good to have examples of this, which we can double check. The most common cases where I see this happening are where there is a conflict in the dates that we were able to see for those individual pages. And sometimes that's a matter of time zone. For example, if one of the date stamps that you're using has a time zone and the others don't, then without a time zone, we really have to guess. Sometimes our systems are able to guess properly, but that's, I don't know, relying a little bit too much on Google's systems trying to figure it out.
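As a sketch of what "aligned" can mean here: in article structured data, both dates can carry an explicit UTC offset in ISO 8601 format, so nothing is left to guesswork. The headline, times, and offset below are made up for illustration.

```json
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example story",
  "datePublished": "2020-02-18T08:00:00+01:00",
  "dateModified": "2020-02-21T09:30:00+01:00"
}
```

Using the same offset-qualified format in the on-page timestamp and in the sitemap's lastmod keeps all three sources consistent with each other.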
So I would really double check those kinds of cases to make sure that everything is really aligned, that all of the time zones are specified exactly the same way, so that there's really no confusion at all from Google's side with regards to how that should be presented in search. And if you see cases like this, then by all means send me some screenshots so that I can double check with the team to see what might be causing that.

Wow, oh my god, still so many more questions. Let's see, there's something with regards to structured data for academic content. I don't know. I think that's specifically around the education program that we sometimes show information on in search. I don't know the specifics of that particular setup, so it's not something I'd be able to help with specifically, but if you could set up a forum thread, that might be something we could escalate to the team.

I have a travel agency blog. One of our guests wrote a great blog post covering the tour that we arranged for him. He already posted it on his Blogger site. However, he'd like to publish it on our website too. If we publish it on our website, will we get a duplicate content penalty or any other penalty from Google?

No, there is no duplicate content penalty. If you're publishing something that's already published somewhere else, it's not that there's going to be a penalty on this content, but rather, if someone searches for something from that piece of content, then we'll try to show just one of those pages in the search results. And that might be perfectly fine. That might be your version. That might be the original version. That depends on a lot of other factors. But there is definitely no duplicate content penalty for something like this.
And if you're worried that something like this might be problematic with regards to ranking, or you want to make sure that you don't negatively affect the original blog post that they wrote, then you can always put a noindex on these pages, so that they're findable within your website when people go to your website, but they don't show up in search. But in general, if it just happens every now and then that someone writes a blog post and you also have the same blog post on your website, then that's perfectly fine. That's not something I would worry about.

Let's see. My main site is hosted by SiteGround. I also have 48 other sites which are hosted by them. All 49 are in the same account on SiteGround. If I move my main site into a separate account, do you think the traffic for my main site might improve?

I don't think that would change anything. We probably wouldn't even know that these are all in the same account on the SiteGround side, so I don't think that would affect anything. The only comment I would have here is that if you have 50 websites that you're hosting like this, then sometimes it makes sense to concentrate all of those into one really strong website, rather than to have 50, let's say, mediocre websites. So that might be something where, instead of spreading yourself thin like this and creating all of these individual websites, maybe you'd see a more positive effect by having one really strong website where you're really focusing all of your energy, and making sure that one website is as awesome as it can possibly be. It's not tied to the hosting setup that you have, but generally speaking, when you spread yourself thin, it often results in sites that are kind of subpar.

OK, wow, we're kind of running out of time, and I'm running out of breath. So maybe I'll just open it up for more questions from your side. Is there anything that I can help with? Hey, John.
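The noindex mentioned for republished guest posts is just a robots meta tag in the page's head (it can also be sent as an X-Robots-Tag HTTP header):

```html
<!-- In the <head> of the republished post: the page stays reachable
     on the site, but search engines are asked not to index it -->
<meta name="robots" content="noindex">
```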
Hi, Zaymos here. Hi. I was wondering whether you could give some guidance in terms of reporting on the performance of a website. So for the effort that an SEO has put into optimizing a site, how would you then report that to the business in terms of progress?

I don't know. It feels like I have to completely switch gears. Yeah, I think there are always two kinds of metrics that come out of work. On the one hand, you have the effort-based metrics, where you're saying, I've done all of these things. And on the other hand, you have the more impact-based metrics, where you say, well, these things have resulted in these changes. Ideally, you'd have a combination of both of those that you'd be able to pass on and say: well, I did all of these things; these are in line with these best practices; I did these because maybe there were issues, or maybe there were improvements that could be made here; and these are the changes we've seen so far. Sometimes the things you do and the changes you see are a little bit skewed time-wise, so you do a lot of things and it takes a while for that to become visible. But documenting both of those sides, I think, is useful.

And in the performance report in Google Search Console, what I struggle with is reporting on brand versus non-brand performance, because of the sampled data. Do you have any suggestions or recommendations on how to communicate to the business how things are going from a non-brand perspective, for a site that has a lot of traffic coming through brand queries?

I think one of the things I would try to do there is to pull that data out, maybe with an API or through the download, so that you can separate the branded and non-branded parts a little more clearly. That's something that I think is a bit tricky to do in the UI, because you can search for one keyword, and then you see, well, for this keyword or not that keyword, you see these things.
Whereas if you download the full data set and filter it, I don't know, in a spreadsheet or with a small script, then it's a little bit easier to get relevant data for the bigger picture. Great. Thank you. Sure.

Hi, John. Just to follow up a little bit on the question from a couple of days ago: we went through a couple of migrations, and then when we check with URL Inspection in Google Search Console, we're seeing that the URL that Google picked as canonical for us is actually not a valid one. It's actually a URL that 301 redirects. So how is it possible that Google picked a URL that we don't use at all? And how can we correct this?

For canonicalization, we use a number of factors. We use things like redirects, definitely, like you have there. We use the rel canonical. We use internal links, external links. We use sitemap files. We try to pick URLs that look a little bit nicer; usually those are URLs that tend to be shorter. All of these are combined with different weights, and based on that, we try to pick a canonical from a set of URLs. So if you're seeing something like what you described, where a URL that is redirecting is chosen as a canonical, then that's usually a sign that there are various signals still pointing us at that URL and saying: sure, it's redirecting, but everything else tells us that this is the right one to show. A common situation where that happens is, for example, if you have a home page that redirects to a detail page. Some CMSs are set up like that, where you go to the home page and it automatically redirects to some lower-level page. Then, from our point of view, the home page is probably still the right one to show in search, and the right one to pick as a canonical, even though it redirects to a different page.
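The download-and-filter approach for the brand versus non-brand split could look something like the script below. This is a minimal sketch: the brand terms and the CSV column names are assumptions about what an exported Search Console queries report might contain, so adjust them to the actual export.

```python
import csv
import re
from collections import defaultdict

# Hypothetical brand terms for the site in question.
BRAND_RE = re.compile(r"\b(acme|acmestore)\b", re.IGNORECASE)

def split_brand_clicks(rows):
    """Sum clicks for branded vs non-branded queries.

    `rows` is an iterable of dicts with 'query' and 'clicks' keys,
    e.g. from csv.DictReader over a Search Console export.
    """
    totals = defaultdict(int)
    for row in rows:
        bucket = "brand" if BRAND_RE.search(row["query"]) else "non-brand"
        totals[bucket] += int(row["clicks"])
    return dict(totals)

# Typical usage with an exported CSV (the file name is a placeholder):
# with open("queries.csv", newline="") as f:
#     print(split_brand_clicks(csv.DictReader(f)))
```

The same bucketing works for data pulled via the Search Analytics API, which avoids the row limits of the UI download.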
That's something where I wouldn't necessarily see it as a bug, or as something that you need to fix, but rather it's almost a hint that perhaps the signals you're giving us are not completely aligned with what you want to have done.

Because we are seeing this happen to, I would say, around 70% of all the URLs on the site. First they got de-indexed, and then they reappeared without a trailing slash at the end in the SERPs. And those are not valid URLs, because we did not redirect to such URLs during the migration. OK, so you're doing a site migration and seeing these kinds of changes happening. Is that correct? Yeah, there's a peak. I mean, it was a couple of months after the migration, actually around five or six months, that this happened. So it didn't happen right away; it just happened. Yeah.

Yeah, that's something where it sounds like the signals are just not aligned yet. That could be based on things like external links going to those pages. But in general, if you set up redirects for those URLs, then even if we were to index those URLs and show them directly in search, users will make it to the right page. So from that point of view, I don't think it's a critical issue that you need to focus on. But I would take a sample of these URLs and try to trace back where Google might be getting confused and picking up information that this other URL is actually the one to choose as a canonical. Yeah, OK, thanks. We'll check that. Sure.

Oh, one last thing, about the robots.txt. If in our subdomain there is a robots.txt blocking a certain path, would that affect the whole domain, or would it only stay within the subdomain? The robots.txt is only for a specific host, so that would only apply to that subdomain. OK, all right, Daniel. Thanks a lot, John. Sure.

OK, let me take a break here. I have a bit more time, so if any of you want to stick around longer, that's perfectly fine.
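To illustrate the per-host scoping of robots.txt: a file served on a subdomain only governs that hostname. The hostname and path below are made up.

```
# Served at https://shop.example.com/robots.txt
# Applies ONLY to shop.example.com, not to example.com or any
# other subdomain; each hostname has its own robots.txt.
User-agent: *
Disallow: /private-path/
```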
But just to kind of pause the recording at a reasonable time, thank you all for joining in. Thanks for submitting so many questions. I didn't get to all of them, but I think we got through a lot more than usual. As always, you're welcome to post in the Webmaster Help Forum. If there's anything else that's on your mind, there are folks there who are able to help out and who are able to escalate these questions as well. So thanks again, and I wish you all a great weekend in the meantime. Bye, everyone.