All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst at Google in Switzerland. And part of what we do are these office hours hangouts, where people can join in and ask their questions around their websites, SEO, and Search, and those kinds of things. A bunch of stuff was submitted already on YouTube, which is great. But if any of you want to get started with the first question, you're welcome to jump on in.

Hi. I've just got a question from one of our clients. The question is about the language tag. We know that we can add the language tag in the header section of the website, or we can create an XML sitemap to show the other versions. So which one is better, or are both the same?

So I think you mean the hreflang attribute, hreflang? Yeah. Both of those are equivalent. We treat them exactly the same.

OK. The reason he's asking this question is that the site is built on WordPress. And if we want to add it to the header section, we need to use some plugin. And the client is afraid that it may reduce the speed, something like that. That is why he is insisting on using the XML sitemap. So do you think it is really a factor?

I don't know if it would really affect speed, but it's certainly an option just to use a sitemap file. And you don't have to use the same sitemap file that you use for the rest of the site. You can have kind of your normal crawling sitemap file, and you can set up a separate hreflang sitemap file just for those language connections. So sometimes it makes it easy to put it in a sitemap file. Sometimes it's easier on the page for debugging. It's really up to you. Thank you. I guess there's also the option of HTTP headers. So that wouldn't require changing the HTML code, but it's still a bit more complicated to implement. Yeah, I guess especially on WordPress, you'd have to do even fancier things. Cool.
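To make the sitemap option concrete, here is a sketch of what an hreflang sitemap entry can look like; the URLs and language codes are placeholders. Each URL lists the full set of language alternates, including itself, so that the annotations are reciprocal across all versions:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <!-- English version: lists itself and the German alternate -->
  <url>
    <loc>https://example.com/en/page.html</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page.html"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page.html"/>
  </url>
  <!-- German version: carries the same set of alternates -->
  <url>
    <loc>https://example.com/de/page.html</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/page.html"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/page.html"/>
  </url>
</urlset>
```

The HTTP header option mentioned at the end works similarly, with a response header along the lines of `Link: <https://example.com/de/page.html>; rel="alternate"; hreflang="de"` on each version.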
Let me jump through some of the questions that were submitted, and hopefully we'll have some time towards the end as well for more questions from you all. Or if anything comes up in between that fits, we can take a look at that.

So the first one that I have here, I don't really know how much I can say. It's about a domain migration. We migrated from one domain to another on August 6, and everything went bad, essentially. And there's a forum thread with a lot more information there. From a first glance, looking at the sites there, it seems that it's kind of OK. So I suspect I'd need to look into that a little bit more in depth. I'll take a look there and also see if there is any escalation from the forum thread that we can help with.

Why exactly can't websites that have been hit by a core update recover before the next core update, even if they make good improvements? Are some algorithms only launched once every couple of months, or how does that work?

So with core updates, we're essentially trying to re-evaluate the relevance of the search results. And it's not something that requires a site to wait for the next update to have a chance to be seen differently. They can continue working on things, and things can improve over time. It's possible that our next core update will make a bigger change in the same direction that you've been working, and you'll see a bigger change in your site's performance as well. But in general, sites don't have to wait for the next bigger update in order to start seeing changes. So from that point of view, I wouldn't just stop working on things once you think you've done the right thing; I'd continue working in that direction. You should see at least some incremental improvements over time.

Are there any tricks to check a website or page for quality issues? For example, the page doesn't show up in the first position for its own unique content, so something could be bad there.
I remember in the past, Google hid rich snippets for low-quality pages. I don't know if there is any trick to see the way that our algorithms assess the quality of a website overall, or the quality of individual pages, because we do look at different things. With rich snippets, I think it's slightly different because of the way that these are implemented. So with rich snippets, if you've implemented them in a way that is valid with the validator, so valid from a technical point of view, and if it's valid from a policy point of view, and if your website just isn't of the quality that we would like to show rich snippets for, you can sometimes do a site query and check to see if the rich snippets would show up there. It's not something from our side that is done by design. So it's not like a feature that was added on the rich snippets side so that you could debug things this way, but more of a side effect. And it's also something where it's possible that at some point, this kind of side effect will get cleaned up to make things more consistent. I don't know if that's really important, because not a lot of people just do site queries on their own, but it is one of those things that's more of a side effect rather than a purposeful decision.

And for other kinds of quality issues, I don't really think there is a clear way to understand how Google would see that. In general, when you run across quality issues on your website, or when you suspect you have quality issues on your website, that's something where I'd recommend getting input from external people as much as possible, rather than just purely looking at the search results.
So that's something where my general recommendation is to take the questions that we have in the blog post that we did for core algorithm updates sometime last year, and look at them with people who are not directly associated with your website. See what they think about your website when they're experiencing it for the first time, when they have to complete a task on your website compared to other websites, maybe. Really try to get that objective input from someone who's fresh, who's not skewed from a technical point of view, who's not skewed because they know you and they think you always do fantastic work. So that's usually my recommendation there.

We're considering changing the navigation on our e-commerce site, and we have a couple of queries. Can you tell relatively quickly whether a new navigation will have a positive or negative effect on a site's ranking once deployed, or does it take a couple of months for things to bed in? Also, can it affect things from an SEO point of view depending on whether the links are displayed in the navigation or not? Do you have any other tips or advice on this, and whether one can revert back to the old style of navigation if it has a negative effect?

So the last part is definitely the case. If you make a bigger site-wide change on your website and you revert that, you basically have the previous situation. But it's not the case that these kinds of things will just flip back and you will have the previous state for your whole website. Rather, we kind of have to reprocess that and understand your site again with the new state. In general, when you're making bigger site-wide changes on a website with regards to the internal navigation, that can definitely have an effect on SEO. Usually it does. I mean, it's one of those things that SEOs sometimes look at when they look at a bigger website. My main recommendation there would be not to change the URL structure unless you absolutely need to.
So that's kind of on the side, because changing the URL structure on a website site-wide is something that does take quite a bit of time to be reprocessed. So if you can exclude that and purely focus on the navigation part, that makes it a little bit easier. And with regards to the navigation, the navigation for us as a search engine serves two purposes, I guess. On the one hand, we need to be able to discover all of the pages on your website, so to be able to crawl through everything and find all of the content. And on the other hand, we need to be able to understand which pages are relevant or important in which context. So which of these pages belong together? Which of these pages is the most important one? That kind of thing. And that's something that you can provide fairly well with the navigation on a website. So one extreme might be that you link all pages to each other. And that makes it easy for us to crawl, because we just have to look at one page and we find links to all of the other pages. But it makes it hard for us to understand which of these pages belong together and which of these pages are important. So finding a balance between everything really flat and everything really deep, where you have to follow one link after another to find everything, that balance is sometimes a bit tricky. But it is something where you need to look for that balance, instead of just purely taking an SEO crawling tool and saying, oh, it says my crawl depth is five, I need to change it to four. That's probably not the right approach. But sometimes these tools can give you some input on things that you might have missed.

John, regarding the URL parameter tool, since e-commerce sites tend to use it, especially given the parameters and everything, will that ever be moved to the new Search Console? Is it planned to be deprecated? Are there any plans for the tool?

The data there has been missing for a really long time. But that's not because we want to deprecate it.
It's just because things are weirdly stuck on our side with the various teams that are involved with creating that data. Internally, we use similar data already. It's not that we don't follow that input at all. It's just that the data we display in Search Console is kind of stuck just before it reaches Search Console. OK. And my understanding is that that should get resolved fairly soon. But I've been hoping that it's fairly soon for a while now. So I've been nudging a little bit more. Hopefully, that'll get better. With regards to moving to the new Search Console, my understanding is that they do want to keep that functionality and move it to the new tool as well. I expect, especially for larger websites, we'll have some really cool stuff coming out over time as well. Yeah, it'll be pretty cool. I don't want to pre-announce anything, so I'm not going to go into more details there. But it definitely is something that makes sense to focus on. Like you mentioned, for e-commerce sites with a lot of parameters, you can clean up some things there and make it a little bit easier to crawl and index your site.

But just out of curiosity, if you're already doing that using canonicals, noindex tags, things like that, do you also need to make sure the URL parameters tool is set so it tells the same story?

No, usually not. Usually not, yeah. From what I've seen, there are two situations where the URL parameter tool makes sense. Both are for bigger websites with a lot of parameters. One is if you really can't handle those parameters with canonicals and internal linking and all of that on your site; then it's kind of a way to bridge that. And the other is if you have an extremely large site and you have individual parameters that just blow up the whole crawl space. That's something where it can make a big difference. So those are kind of the two situations.
And in both of those, it's really for sites that are a little bit bigger. It's not for the average site; even a, I don't know, 100,000-page e-commerce site feels kind of small with regards to what those tools are meant for. Cool.

Hey, John, since we're on Search Console as well, I've been facing some issues with the Search Analytics API. So I'm just wondering how we could look into that, or raise it, in the sense that if I look at my search query data or my search analytics data, I see impressions and clicks. But when I'm trying to pull the data via the API, it doesn't return any entries at all. And to add some context, on one of my domains, it's working perfectly fine. On one of my subdomains, it's not working.

OK. My guess is that it has something to do with the way the site is verified. Maybe you have something like HTTP verified in the API and HTTPS verified in the UI. Because the back ends for both the API and the UI are exactly the same. What commonly happens is that the absolute numbers are not exactly the same, because the queries are processed slightly differently. But it shouldn't be the case that you have no results in one and lots of results in the other. That definitely shouldn't be the case.

So basically what I did was that I used the exact same domain. Imagine I'm www.x.com, and that's what I verified. And I took that exact same domain and plugged it into the Search Analytics API test query on the browser page. It returns a 200, but without any data. So that's the issue I'm facing. It's valid, but there's no data coming back.

OK. That seems weird. I mean, you're welcome to drop your domain here in the chat, and I can take a look at that afterwards. But one thing you might also want to try in the meantime is to check out some of the other tools that use the Search Console API, just to see if maybe there is something unique in the way that you had that query compiled versus how other tools would pull that.
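One way to sanity-check this kind of mismatch: the Search Analytics API takes the property URL, URL-encoded, in the request path, so a different scheme or a stray trailing slash addresses a different (possibly empty) property. Here is a minimal sketch in Python of building such a request; the property URL and dates are placeholders, and `build_query` is a hypothetical helper, not part of any client library:

```python
from urllib.parse import quote

API_BASE = "https://www.googleapis.com/webmasters/v3/sites"

def build_query(site_url, start_date, end_date, dimensions=("query",)):
    """Build the endpoint URL and JSON body for a Search Analytics query.

    The site_url must match the verified property exactly (scheme,
    www vs. non-www, trailing slash); a mismatch is one way to get a
    valid 200 response that contains no rows at all.
    """
    # The property URL is a path segment, so it must be fully escaped.
    endpoint = f"{API_BASE}/{quote(site_url, safe='')}/searchAnalytics/query"
    body = {
        "startDate": start_date,   # YYYY-MM-DD
        "endDate": end_date,
        "dimensions": list(dimensions),
        "rowLimit": 25,
    }
    return endpoint, body

endpoint, body = build_query("https://www.example.com/", "2020-01-01", "2020-01-31")
```

Real requests additionally need OAuth credentials; this only shows the shape of the request, which is also what third-party Search Console tools send under the hood.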
Like Mihai has a nice plugin for Google Sheets that lets you use the API to pull in data. And that way you can double-check: is it that Search Console is not giving me any data because my domain name is slightly different than what it expects? Or is it because I'm doing something slightly unexpected, where maybe I'm copy-pasting something slightly different? Sure. Cool. Thanks, Mihai. I saw your message.

Hi, John. Thank you for taking your time. So I have a question regarding featured snippets. We have encountered some locations in which there has been a featured snippet in place in the search results. But afterwards, after some analysis of the search results, we looked up the same term on Google and found that there is no featured snippet anymore. So there are two questions for us now. Is there any way we can actively aim for a featured snippet by structuring the data on our site in a certain way? And if it happens that a featured snippet goes missing, what does that indicate for the term and for the search result as such? Is the featured snippet not attractive anymore? Should we not try to get the featured snippet if Google removes it from the search results? Are there other reasons for the fluctuation in these instances?

Those are good questions. But it's kind of tricky, because for us, a featured snippet is just a different way of showing a search result. It's not something that we would pull out and say it's something completely different. So it's something that can appear sometimes, and sometimes it might not appear. Some sites see a lot of fluctuation in the visibility of them, and for others, it's fairly stable. From our point of view, we don't have any explicit guidelines on what you need to do to have content that works well in featured snippets.
There are externally some people who have written about ways that you can write your content so that it works well with featured snippets. I would definitely check some of that out. But from our point of view, it's not that there's a technical thing that you have to do to make featured snippets work, or that you have to have one question and an answer and then you will get that in a featured snippet. A lot of these things are things that our algorithms try to figure out automatically. And externally, people have written about what they've discovered works well or doesn't work well. So I would take a look at some of those external blog posts and presentations that people have done. And I guess the other thing is, in Search Console, you can't really track those featured snippets. It's not something that we would pull out separately, because from our point of view, they're kind of a normal search result. What you'd probably see is that the position of the page for individual queries kind of goes up and down, where when it's in the featured snippet, it'll be at position one, and when it's not in a featured snippet, maybe it'll be further down in the search results page. So that's kind of anecdotally what you might see. All right, thank you. Sure.

OK, let me see what other questions we have submitted. And we'll definitely have more time for more questions along the way.

I'd like to know, what are the basics to avoid indexing URLs from a pre-production or development environment, and to not have duplicate content issues?

I know you're not the only one with this problem. I think pretty much everyone who has a staging environment has run into this issue as well. So basically, the best approach to preventing any of your staging URLs from being indexed in Google is to put them behind some kind of server-side block. That could be server-side authentication, where you have to enter a password and username or something to get to those pages.
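As an illustration of that kind of server-side block, here is a sketch of what it might look like on nginx; the host name, file paths, and realm name are all hypothetical:

```nginx
# Hypothetical staging vhost: every request must authenticate,
# so crawlers (and anyone without the password) never see the content.
server {
    listen 80;
    server_name staging.example.com;

    auth_basic           "Staging environment";
    auth_basic_user_file /etc/nginx/htpasswd-staging;  # created with htpasswd

    root /var/www/staging;
}
```

Because this block lives in the server configuration rather than in the pages themselves, it isn't part of the files you push to production, which sidesteps the failure mode described next for robots.txt and noindex.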
Or it could be a block by IP address, if you have a specific IP range for your developers, your quality teams, those kinds of things. That's essentially the best way to really prevent that content from being indexed. There are two other approaches that people sometimes take. One is to use robots.txt to block crawling of those URLs. In general, that's OK too. For the most part, we wouldn't index those URLs if there are no links to those pages. However, a very common mistake that happens there is that people sometimes push the fully blocking robots.txt file to production as well. And then suddenly, your production server is also blocked. So that's one of those areas where it is very easy to make mistakes. The other approach is sometimes to use a noindex meta tag on these pages. And that also prevents those pages from being indexed, which is what you're looking for. But again, you have the same problem that it's easy to take those noindex pages and push those to production. By using something like server-side authentication, which generally is not part of the full set that you push to production, you can sidestep that. You can push something to production, and you can be really sure that if you can access your production pages without a password, then it's actually working well. So those are the recommendations I'd have there.

Just as an aside, if those pages aren't linked from anywhere, surely then it's fine. You shouldn't get to them. You're not the deep state. How do you get to the pages if they're not linked from anywhere?

If they're really not linked from anywhere, that's perfectly fine. But people leave traces and links in all kinds of weird places. So one of the weird places where we sometimes see these internal links pop up is all of these browser plugins that tell you which sites are popular, that kind of thing.
And then suddenly your domain is listed there as, this is one of the, I don't know, least popular websites, because, of course, only the developers are visiting it. But it's listed somewhere. And then suddenly there's a link. And it's these kinds of vague traces that are around the web, which search engines sometimes just stumble across and are like, oh, look, a new website, I will do my best.

Right. Does it usually happen with people using off-the-shelf CMS stuff, then, if it's plugins or things that you install that you forget about?

Yeah. I don't know so much about plugins and CMSs, but definitely from the browser side, that's something that we see a lot. All of those kinds of popularity sites that track what the top websites are in the UK, or the top websites in different regions. It's easy to get somewhere on the bottom of the list there. And then it's there. I mean, another thing we sometimes see is when people send URLs by email to other people, and the other people end up being public mailing lists. And then suddenly some public mailing list has this URL that you don't actually want to have indexed. And if it's a matter of just having the host name, then that's something that feels like it can just happen at some point. And you're saying Google reads email? Is that what you're saying? If they're on public mailing lists.

Let's see, I have the next question here. A page on my website hasn't got a main entity, and I don't know what to do. In fact, it's an old page which I've tried to delete, but I can't manage it. What can I do?

So deleting a page is perfectly fine. That's something where essentially, if you want to delete it and you can work out how to delete it in your CMS, if you're using something like Blogger or WordPress or whatever, then deleting it should be possible. In general, the main entity sounds like it's missing some kind of structured data on the page. And from our point of view, structured data is not a requirement for a page to be in Search.
So if there's one page on your site that just doesn't have structured data on it, or doesn't have that type of structured data on it, then I wouldn't necessarily worry about that.

Does adding the year to a post title help in rankings? For example, how to start a website, versus how to start a website in 2020?

From our point of view, it doesn't help in ranking. It's not that we have any algorithms that look for the current year and say, oh, this is a recent article, we should show it higher in the search results. Sometimes users look at that. I sometimes find that a bit misleading, because you see websites essentially update all of their year numbers at the end of the year. And then suddenly it's like, what is the best cassette recorder from 2021? And then actually, there are no cassette recorders from 2021. But the website has updated its content like that, which feels a bit misleading to me. So I don't think our algorithms would penalize that, but I definitely wouldn't favor it either.

Why are news websites going down day by day? Is there any update going on?

As far as I know, news websites are not going down day by day. It almost seems like news websites are always in the foreground when you look around, because there's always some crazy news happening that pushes their content to be more and more relevant. So there is definitely no kind of anti-news-website update or algorithm happening at the moment.

Big publications artificially freshen their stories by messing with the timestamp on a very regular basis. In the search results and in News, will you ever take action on such sites, or only keep affecting small websites with algorithm updates?

We do see this from time to time. And sometimes it does happen that this has a positive effect on a website. And that is something that we escalate to the team so that they can take a look at it. On the other hand, we also have fairly robust systems to recognize the actual date of an article.
So if we can find an article on the web and we can recognize when it was really published, then even if they change the timestamp on their pages, it's not something that will skew our systems. It does feel like this is a bit of a cat-and-mouse thing, where we improve our systems, and then they find new sneaky ways to get around them. But in general, I think we're reasonably OK there. If you find situations where we're not catching on to this, feel free to let me know. Like, send me some examples on Twitter, ideally screenshots, where it's easy for us to confirm that something like this is happening. Because with a lot of these news things, if you just send us the URLs and we take a look a day later, or we pass that on to the team and they take a look a day later, then it might look completely different by then. So clear screenshots of examples where something is going wrong are really helpful.

I'd like to know what would be the best technical approach, whether it's follow versus nofollow, noindex, canonical, in the following case. We have two domains, A and B. Domain A is linking from its landing page to domain B's landing page, passing a parameter with the URL. Based on the parameter, the landing page at domain B is personalized with a different H2 heading, copy, et cetera. There's also a default version of the landing page at domain B, available at the URL without parameters.

So ultimately, this is up to you. It's whatever you would like to achieve there. It sounds like the home page of domain B without parameters is kind of one home page, and the home page of domain B with parameters, or with those specific parameters, is a different version. And from our point of view, you can do that. You can have two different pages indexed. It doesn't matter so much that it's the home page with parameters or without parameters. Essentially, these are two different URLs from our point of view.
So if these are two different URLs, then you can decide on your side: do you want them indexed individually, or do you just want one of them indexed? If you just want one indexed, then using the rel canonical pointing to the version that you prefer to have indexed is essentially the right way to do it. But you can also decide to say, well, both of these are OK to have indexed; I don't mind if they're both indexed. It's essentially up to you.

Let's see. I think we still have a bit of time, so I'll run through some more of the submitted questions, and we can get through some of the things from you all live as well.

My site was verified in Search Console a long time ago with the WWW in front. After a site redesign, I removed the WWW. Do you think this is a problem? Should I add another property in Search Console without it? Or should I redirect all my assets and pages to the WWW version? Just wondering if this is a big issue from an SEO perspective.

So on the one hand, it's not a big issue to switch from one version to the other. But you need to make sure that you have redirects in place. So if you were previously at www.yoursite.com, and now you're at the same version without the WWW, you need to make sure that there are redirects from the old version to the new one. That's the basic thing that you need to watch out for. If you don't do that, we will assume that the old version is kind of broken, and we will let that disappear from Search, while we recognize your new version and think, oh, this is a new website, and slowly integrate that into Search. So to prevent us from losing all of the value that you've built up, make sure that you set up those redirects. That's the basic thing there.

With regards to Search Console, there are two ways that you can verify sites. One is on a domain level, and one is on the prefix and protocol level, where you would have to specify the WWW or not.
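Going back to the redirect setup for a moment, a minimal sketch of that host-level redirect on nginx could look like this; the host names are placeholders:

```nginx
# Hypothetical rule: permanently (301) redirect the old www host
# to the same path on the bare domain.
server {
    listen 80;
    server_name www.example.com;
    return 301 https://example.com$request_uri;
}
```

With something like that in place, both users and crawlers arriving at the old www URLs end up on the new versions.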
So if you currently have the prefix version verified and you switch to a different prefix, you need to also verify that new prefix in Search Console. Usually, this is pretty straightforward. A nice way around that for the future as well is to switch to domain verification; then you don't have to worry about that. Search Console is not a requirement for Search. So if you forget to do the verification in Search Console, your site will still work. It's not that something will break. Also, if you switch to a different Search Console verification, it's not that your data will disappear and you have to start over again. Search Console will recalculate the older data and show that to you there as well. So on the one hand, watch out for the redirects, that you have those set up properly. And then I'd just recommend making sure that you see your data in Search Console with whatever version works for your site.

If we have a page 404 today, and after a few days we make it a 200 again, submit it in a sitemap again, and internally link to it again in the site as well, how will Google treat these? Will this create a negative ranking impact when Google re-indexes these pages? Will Google report these URLs as submitted but not found (404)? What bad things can happen if we start using 404 in place of noindex?

So for the first questions there: if you have a 404 page and you make it a 200, and there's content there, then when we reprocess that page, we will take that into account. There is no penalty at all for that page having been a 404 in the past. We just have a new page that we can index, and we'll try to index it the best we can. So from that point of view, it's not that there's any downside to doing this.
The one thing to keep in mind here, though, is that if a page has been a 404 for a longer time, then we tend not to crawl it as frequently, because we don't want to bother you with all of these requests for pages that you have always been telling us don't exist. So we probably won't check that page every day or so. It might be that we would check it maybe every other month or something like that. So if you have a page that you turn from a 404 to a 200, it can happen that it takes a bit of time for that to get picked up again. That's one thing to watch out for there. And in particular, if you're changing a page from 404 to 200 and then back to 404 again and back to 200, back and forth a lot, then if we get into the state of, oh, this is a 404, we don't have to crawl it that often, it can happen that we miss some of these fluctuations back and forth. So I would try to avoid the situation where you're going back and forth a lot. If you're doing this once, then that's less of an issue. The last question, I think, was what bad things can happen if we start using 404 in place of noindex. I don't see anything bad really happening there. Essentially, if you're telling us that these pages should not be shown in Search, or that they don't exist, then both of those options work, 404 or noindex. My suspicion is that maybe a noindex would be visible a little bit faster, but I think if you're looking at it from a bigger-picture point of view, you probably wouldn't notice any difference there. So from that point of view, if you have a choice between 404 and noindex, I would just pick whatever works best for you.

Can duplicate pages due to uppercase URLs cause a site's performance to drop? And does keyword cannibalization exist?

OK, those are two completely different things. On the one hand, with the duplicate pages with uppercase URLs: we do treat URLs as being case-sensitive.
So if you have URLs that have some uppercase characters in them, and URLs that have lowercase characters in them, then we would treat those as being unique URLs, and we would try to crawl and index those individually. We would probably fairly quickly recognize if they're the same. If your server treats them as the same thing, we would see exactly the same content, and we would fairly quickly realize, oh, we can just focus on one of these. So essentially what you're doing is creating duplicate content in a technical way. I call this technical duplicate content, because it's not that you're duplicating content with the hope of doing something sneaky. It's more that, well, your server is technically creating multiple URLs for the same pieces of content. Usually, we can work around that fairly well. So especially if you're a smaller site and you have this kind of duplicate content, we can work around that. That's less of an issue. If it's a very large website, where we struggle with crawling even one version of the content, and you suddenly have multiple versions with uppercase URLs in there, then that's a bit of a different question. So that's something where I would really focus on cleaning up those URLs, making sure that you're linking to a consistent version, that you use the rel canonical to your consistent version, and really make it so that we can crawl your website with the preferred URLs as efficiently as possible. It feels like that's something that a lot of sites tend not to worry about so much anymore. In the past, it felt like a bigger problem, especially when you were making your own websites, when you were coding your own HTML. Then it's very easy to code those links yourself, and some files are uppercase, some lowercase, those kinds of things. But if you're using a common CMS like WordPress, or any of the other common CMSs, then usually that's something that's taken care of for you.
You link to a specific page, and that link automatically has the right uppercase or lowercase version. Does keyword cannibalization exist? So keyword cannibalization is what people usually call it when you have multiple pages on your website that are targeting the same keywords; essentially, when you're making life harder for yourself than it needs to be. And from that point of view, you could say it exists. I feel that the name makes it sound scarier than it really is. There's nothing mystical around it. It's essentially just you going to people and saying, well, you're searching for this really important term on my website, but here are five options, instead of, here is one really strong option that I think you need. So it comes across to me more as kind of a marketing question of how you present your website, and do you prefer to have one really strong option that you show to people, or do you have kind of a diluted version across multiple variations of the same thing on your website? Sometimes it makes sense to have multiple variations, especially if you know that people are searching for something with a generic term, and they're not quite sure what they actually mean. Sometimes it makes sense to really have one very strong version, where you really have something that can rank really well, because you focus all of your energy into it, and you present that within your site as the primary approach. So those are the things that I would watch out for there. And let me just take one more question, and then we can switch to more questions from you all. My WordPress blog has date published and date modified properties. If I'm using date modified, why are my blog posts that were published in 2016, but which I have spent extensive hours updating in 2020, still showing the date of 2016 in search results? It's misleading to potential visitors, and I believe I'm losing clicks because of this. I also don't understand the logic behind this behavior.
So I haven't had a chance to take a look at the exact URL here. But when it comes to dates, we look at multiple things on a page. I think we have a Help Center article on dates as well. One important thing is that we need to be able to confirm the date in the visible part of the text as well. So just updating the metadata in the structured data, if that's something that you're updating, isn't going to kind of sway our algorithms into understanding that actually this is the right date to show. We really need to have that confirmed in the visible part of the page. So if at all possible, make sure that the date is something that we can confirm in a way that is easy to understand. And by easy to understand, I mean really listing the date on the page, and not something vague like "updated in early 2020." It should really have the same date, so that our algorithms can confirm it when they look at the page. Usually what happens with regards to dates is we try to extract all the dates that we can find on the page, in the visible part and in the structured data. And then we try to judge which of these dates have support within the article: which of these dates are mentioned multiple times, which of these dates seem to be relevant for the article. And then we try to narrow things down to those dates. So making it as easy as possible for us to confirm that a date is the right one is kind of what you should be aiming for. OK, wow, still a bunch of things left, but let's switch over to questions from you all. I have a question on Google News. That's OK. I don't know everything around Google News, but I'm happy to try. Sure. So there have been several complaints on the Publisher Center forum, as well as Twitter, about sites that got into Google News after December 2019. They haven't really been showing up in the News tab if you search for them. And that includes larger publications like CNN Brasil and IndiaTV.in. And there are several websites like that.
And none of them really show up without adding the site operator to the search keyword. If you look at the older websites, which got into the news before December 2019, they do show up in search without any site operator keyword. So what exactly is the difference between the old Publisher Center sites and the new ones? And why is it that we have to search with the site operator only on those new websites? Because ideally, they should show up even if you search with the site name, because it's been almost like 10, 11 months since December 2019. So there have been so many complaints on the Publisher Center about this, and we haven't really seen satisfactory answers out there. So we just wanted to clarify if this is something that publishers can be helped with. Yeah. So again, I'm not on the Google News team, but I have seen a lot of these complaints as well. And I spent a bit of time also chatting with Danny Sullivan about this to try to understand this a little bit more. My understanding is that this is something that we're working on. So it's not by design that there is this one cutoff date, but rather that things are being processed just a little bit slower than they used to be. And the other aspect there, the site query: from my understanding, even if you're in the News tab and you do a site query for something that's not in Google News directly, it'll fall back and show the data from Google Search. So it feels like, oh, it should be there, but it's not being shown for normal queries. But actually, what you're seeing is the normal search indexing, and not what would actually be shown in Google News. So I hope we can clean that up a little bit. I don't know what the time frame is there, and I don't know exactly what all will change there. But it does feel like something where things have been going a lot slower than they should be. Yeah, there is a bigger problem, actually, because of that.
So many publishers are switching to domains which were in Google News before December 2019. And then they're kind of exploiting this thing to show up with all those copied articles, scraped links, and all that. They are appearing very easily, because they got approved before that time frame. So they just show up without any effort at all. Basically, there were a couple of examples given by the product expert in the Publisher Center thread also, saying there are a couple of websites which are really virus links. So they are coming up in news, and it is kind of sad that the original publishers have to kind of suffer because of this. It would be great if you can take a look at that. Yeah, I mean, I'm not on the Google News team. So I just poke at them and tell them they should do some more there. But one thing also to keep in mind is that there's abuse everywhere in Search. And it's something where, if you explicitly look for some kinds of abuse, you will find it. So it's, I don't know, it feels kind of weird to say what we should be showing in Google Search. And look, I also found some abuse from existing publishers, because then it almost detracts from the effort that could be put into your case to kind of improve things on your side, by telling them, oh, actually, you should be cleaning things up first. So that's, I don't know, just from my personal point of view. But we are definitely pushing a little bit on the News folks to see if we can speed all of this up again. Thank you, John. I'm happy with that. Sorry, can I jump in? Sure, go for it. This is Brittany. I posted the first question that you read, about the domain migration gone very, very wrong. And I have a team of 35 depending on me and depending on this website's traffic. So it's been just such a tough time for us. I have a few theories of what might have gone wrong. I was hoping to run at least one by you. But I know this is a very specific scenario.
So if you'd prefer to just follow up with me after, that's fine too. I can jump in with one of them, sort of taking your lead on what's best in this forum. So usually what I do on Fridays is also just have a bit of time afterwards where people can ask questions kind of off the record, which makes it a little bit easier to look into some specifics as well. So if you want to hang around a little bit, we can take a look at that then. Would that work? I will do anything to get any next steps on the situation. So yes, thank you. Cool. Am I clearly audible here? Yes. Sir, I am asking about some websites which do not have any content, just putting in two or three keywords, and are ranking in Top Stories, whereas we, as news publishers, produce high-quality content but are still not able to get on the first page. But they are ranking with just three keywords. I have video proof also, and posted it in the Google News forum. They are saying that it is a computer algorithm and we can't do anything. What I would recommend doing there is using the spam report form, if you think that this is really bad content, or using the feedback form to let us know when you see bad results. But sir, still the big news publisher sites like CNN Brasil and IndiaTV are not able to get into Google News. I think we talked about that previously. So I don't really have any updates there. I mean, one thing that I'd also just like to point out is that a lot of the sites that I looked at that were escalated in those threads are things where I am kind of, I don't know, a little bit reluctant to say that we should treat these as news sites. I'm not on the news publisher team, but there are a lot of, I don't know, weird things there. So that's another thing to keep in mind: just because things are submitted in the Publisher Center doesn't necessarily mean that we would show them in Google News.
But like I mentioned before, we are definitely going to poke the Google News team a little bit to see if we can speed some things up there. OK, thank you, sir. Hi, John. Hi. I want to ask you a question about AMP pages. These last days, my AMP pages don't show up in the search results. I don't know what happened, and I wondered if you have any clue about this. Because on the technical side, the AMP page is AMP HTML, and we have checked it in Search Console; the result is that the AMP page is valid. But when I search on Google on mobile, it doesn't show up. Why did this happen, if you have any clue? So you have a setup with a connected AMP page, just to confirm. So one HTML page, like, I don't know, a paired setup? A regular HTML page and the AMP version of that page? Is that correct? Yeah, yeah, we have done that. OK, and you're looking in the mobile search results, and just looking at the normal search results there, not any specific feature or ranking question. It's really just showing the HTML version and not the AMP version. Yeah, I think my question is, because the AMP page doesn't show up in Google Search, I think the rank also dropped. Is that correct? I don't know. I mean, usually the switch between the normal HTML version and the AMP version is something that would happen without any ranking change. It's just that we would show one version or the other version. So that wouldn't affect the visibility of the page at all. It's really just a question of which of the URLs is being shown. One thing you could do is make sure that in Search Console, you can also track those versions, depending on how you have AMP set up. If it's like a subdomain or a subdirectory, then make sure that you can also get that data in Search Console, to see if there is anything specific that's flagged there.
Also, in Search Console, there is the AMP report that can give you a little bit of information on issues with AMP pages that we find, which would result in us not showing the AMP page, but rather the normal HTML version. But again, all of this wouldn't change the ranking of the page. It's really just which version is being shown. I see. OK then. So I think if you're seeing a change in ranking as well, then that wouldn't be related to the AMP version or not. That would be more of a general ranking question. Sorry. In our Google Analytics, I have seen a slight drop in the AMP traffic. So I think that's also because it doesn't show up in Google Search; I think the rank is also dropping. OK. Then that sounds more like a general ranking question. Then I wouldn't worry so much about the AMP page. I would think about what you could do to improve the ranking of your pages overall. I see. Still confused, but I don't know. I have checked with the AMP page validator in Google Search Console. They checked, and the AMP page is valid, by the way. But I don't know. Still confused about it, but OK. Thanks for the answer. Sure. OK. Can I ask one thing? Is it possible for you to organize a meet like this for the News team as well? Just a request from our side, because there are a lot of open questions for these news publisher guys. At least bi-weekly or something like that, like what you are doing here. I am requesting this from my side. OK. Yeah. OK, we have some people who are interested in news hangouts. I don't know; I'll ask. Yeah. OK, thank you, John. All the best. Cool. Let's take a break here. I'll pause the recording. If any of you want to hang around afterwards to ask more questions or chat more, you're welcome to do that. In the meantime, anyone who's watching the recording, thank you for watching along. I hope you found this useful, with some of the questions going back and forth. I thought lots of good stuff was happening.
I wish you all a great weekend in the meantime and hope to see you all again in one of the future hangouts. Bye, everyone. Take care, bye. Take care, bye.