All right, welcome, everyone, to today's Webmaster Central Office Hours Hangouts. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. Part of what we do are these Office Hours Hangouts, where people can join in and ask their questions around SEO, their websites, search, whatever. A whole bunch of stuff has been submitted already, but if any of you want to go ahead and ask the first question, you're welcome to jump on in.

OK, I think I need a button with a cricket sound. I think that would be useful here. But that's fine.

I can also, and it can be a bit annoying maybe, repost the question that I've posted before. I think Miha is smiling because he knows what I'm talking about. Do you remember we've been talking about this issue with weird indexing for a website? I think it was two and a half months ago now. And I'm just trying to be annoying about it, because I know you guys had some trouble in the last few weeks, and there was probably a bit of a different focus in the search team. But really, really help me out here. I need some help.

I don't have anything new there, but I can check in later today with the team.

I also don't want to be too bothersome, but yeah.

Yeah, I realize when things don't work out, it's good to try to get some input there. That's fine.

But if you give it seven years, eventually you get the answer. So hang in there.

I hope it doesn't take seven years.

No, usually these kinds of things, when I send them to the team, they're able to take care of them fairly quickly. So I'm kind of surprised that this is still sticking around. But I'll poke them a bit.

It must be something difficult. I appreciate that.

I don't know. Sometimes they're weird things.

Sure. All right, any other difficult questions before we jump in with the other difficult questions? No? OK.

Hi, John.

Hi.

Hi. There is an issue that I am facing with Google Search Console. Like, we have a website in multiple languages.
And we have a subdirectory system. We don't have subdomains for the other languages. So what I'm trying to do is add properties in Search Console per language. But I'm not able to do that, because when I add a folder, it asks me to add different meta tags or verify the property via Google Analytics. You know, if you have Google Analytics, you can verify your property in Search Console. But I'm not able to do that. So what could the issue be? I just wanted to confirm.

OK, I don't know if there is anything unique with the way you would need to verify with Google Analytics. But with the other methods, if you have the main domain verified, you should be able to add subdirectories and subdomains right away. So if you're seeing that it's not working with the verification method that you have set up now, I would just try a different method. I know, for example, with domain-level verification, if you do that with DNS verification, then you can definitely just add all subdomains and all subdirectories without doing any special verification. So that might be an option to look into.

Sure, I'll try other methods.

Sure, interesting. OK, let's see. The first question I have on top of my list is one where I don't really have anything that I can help with, because there's no additional information.

It would be highly appreciated if you could help us out here. Our prime keywords have gone from the search results. These keywords were ranking on the first page for the past seven to eight years. Three of our sites have been impacted by this, and we have used the same keywords on all three sites, since we have similar products under different brands. We have done multiple on-page and off-page analyses to check and ensure whether we had any spamming issues. Unfortunately, the end results of all of our analyses did not trigger any signs of errors or warnings. All these keywords are showing impressions and clicks in Search Console.
We've already raised this concern in the forum previously as well. Thank you in advance.

So there's no information here that I can look at, so it's really impossible for me to say anything here. In general, just because a site was appearing well in search results for a number of years does not mean that it will continue to appear well in search results in the future. These kinds of changes are essentially to be expected on the web. It's a very dynamic environment. On the one hand, things on the web change with regard to competitors and other sites. On the other hand, things on our side change with our algorithms in search. And finally, on the user side as well, the expectations change over time. So just because something performed well in the past doesn't mean it will continue to perform well in search in the future. So that's kind of the only thing I can really point out here. My recommendation would be to start a thread in the Webmaster Help Forum and include the details. Include the sites that you're talking about, the searches that you're talking about, maybe some screenshots if you're seeing something confusing in Search Console. But without any kind of details, nobody can really help you. So that's kind of the direction I would head here.

My question is regarding footer links. The company I work for is one of the biggest publishers in the country. They have many popular properties and are linking them together with exact-match footer links and exact-match brand name links. I want to change these into brand.com or brand-only links so as to not get hit by any linking penalties. Can you elaborate on this and confirm or deny that nofollowing, or using only branded anchor links without keywords in the footer, is the right approach?

So in general, if these are websites from the same company, then that's not something you would need to nofollow. So that's kind of the main thing I would add here.
I think going forward, linking mainly by brand might be an option there. It kind of depends on how much we're talking about here. If this is a handful of different websites that you're running at the same time, then that's, I think, a total non-issue in any way. On the other hand, if you're talking about hundreds of websites and you're cross-linking all of these in the footer across all of your websites, then that's something that starts to look a lot more tricky, I guess. And that might be something where the web spam team might want to take a more detailed look at what is actually happening here. My suspicion is that it lies somewhere in between, maybe, I don't know, 10 sites, something like that. And that's the order of magnitude where I'd say it's not going to make or break things in any particular way, depending on how you link there. I would just link naturally. So if people know these sites by brand name, link to them by brand name. I think that's a good approach. I don't think you need to nofollow these links.

Two days ago, I saw a site in the top result which has been offline for about two years. Can you tell us the reason behind this?

I have no idea what you're seeing in the search results, so that's really hard to say. We do keep URLs around in our index for a fairly long time. But if a website has been offline completely for a number of years, then it would be kind of unexpected, from my point of view, for it to show in the search results. What might be happening is that maybe this URL is blocked by robots.txt, and we've never been able to re-crawl it, so we don't know what is actually behind that URL. So that might be something that's happening. But if a website is completely offline, if the domain has expired or whatever, then we would usually drop those fairly quickly from the search results.

The old structured data tool will now be replaced. Can Google improve the speed of the new tool? I've measured it once.
The old tool gave results in four seconds, and the new one takes over 30 seconds. Yeah, it's like, what's up with speed as a ranking factor, Google?

So I know the team is aware of some issues around speed. But I don't know how much of that can actively be resolved, because one of the differences between the old tool and the new tool is that the new tool essentially runs the pages through all of our indexing pipeline. So it runs through all of the processes that we would use for normal indexing to determine what's on the page and how we should treat it. And that's something that can sometimes take a little bit longer than if you just load a page from a server and do a quick analysis of it. So that's kind of one of the reasons why things are a little bit slower there. In general, though, when we look at the metrics overall, we see that the majority of the requests are handled really quickly, definitely not in the order of 30 seconds. So that's something where, depending on the site, sometimes maybe it will take a little bit longer, but for the most part, it should be fairly quick. But I don't think it'll be in the same range as something that just has to load the HTML and then move on.

If I define the schema type in another language, which language should I use for type, service type, et cetera? For example, Buchhalter in German versus Accountant in English.

I don't know how that would be best handled. Usually our recommendation would be to use the same primary language on a page, and to have one primary language per page. So you wouldn't have an entry for an accountant in German and the same page being the entry for the accountant in English. You would have one page in German and one page in English. And if you have one clear language per page, then you don't have the problem of multiple languages that you want to use in the structured data on one page. You would essentially just use the primary language for the structured data on that page.
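To illustrate the advice above with a quick sketch: a German-language accountant page would carry its structured data entirely in German, matching the page's primary language. All the names and values below are invented for illustration, and the `AccountingService` type is just one plausible schema.org choice, not a recommendation from the Hangout itself.

```python
import json

# Hypothetical structured data for a German-language page: the values
# use the page's primary language (German), not a mix of languages.
structured_data = {
    "@context": "https://schema.org",
    "@type": "AccountingService",
    "name": "Beispiel Buchhaltung GmbH",
    "description": "Buchhaltung und Steuerberatung in Zürich.",
    "serviceType": "Buchhalter",
}

# The JSON-LD block that would be embedded in the page's HTML.
json_ld = '<script type="application/ld+json">{}</script>'.format(
    json.dumps(structured_data, ensure_ascii=False, indent=2)
)
print(json_ld)
```

The English version of the page would then be a separate URL carrying the same markup with English values, rather than one page mixing both languages.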
OK, John, I'll just follow up on that. If I have a page with mainly video content, and the video is in English, but I have a foreign-language transcription and foreign-language subtitles, can I use the foreign language in the video schema markup, like the title, description, and things like that?

Sure, sure. I think in that case, from a theoretical point of view, I would see that page as being in that foreign language. It has a video, and the video is maybe in English. But essentially, you're targeting users in your own language. So that's something where I would use the markup for that particular language.

I see. But a video markup is about a video, right? But if I use the title in the foreign language and the description in the foreign language, as long as I have the translated subtitles, it's OK?

I think that's OK, yeah, because you're making kind of a video landing page for that. So essentially, someone is searching for something. They would go to your page to watch this video, and you want the people who are searching in your language to land on that page, not necessarily someone who's searching in English to go to your landing page, right?

OK, sounds great. Thank you.

Sure.

What is the relationship between the core Google updates in organic search and Google Discover? I have some websites that did not receive any impact from the May 5 algorithm update, but dropped massively in Discover.

So we see Discover essentially as a part of search. "Search" is probably the wrong word to use for Discover, because you don't actively search for something; rather, we show something to you. But from our point of view, it falls into the general systems of search, which means we use the normal crawling and indexing systems for it, and we try to understand the content like we would for search. And we also use a lot of the quality algorithms and a lot of the understanding systems behind search when it comes to Discover as well.
So it's very possible that some of the bigger algorithm updates that we make for search also have an effect in Discover, because we reuse those same algorithms across all of these different places. So maybe what you're seeing there is really such an effect. I think it's always a bit tricky with Discover, because it's more like supplemental traffic that you get to your website, since people aren't actively searching for it. And it's really hard to diagnose what the problem is if your website is not being shown automatically, because there's no real way to determine whether it should be shown at this point or not.

I own a website with multiple international subdomains. I have multiple language home pages for each subdomain, but there's a problem with my Canadian subdomain home page. In Search Console, it shows that it's indexed, and the canonical is what I chose it to be. But for some reason, the page is not showing in the search results. Other pages from the main subdomain are showing fine.

It's really hard to say without any details. So maybe what would help here is to either have some details in the question with regard to some queries or some URLs where you're seeing this problem, or to post in the Webmaster Help Forum to get some input from other people as well. And for that as well, you'd need some queries or some URLs where you're seeing this problem. In particular, when it comes to a Canadian home page, my guess is that you might be using the same language content on the Canadian home page as maybe the American home page, or maybe the French home page, depending on which language this page is in. And in general, when it comes to international websites that have the same language content, but for different countries, it's something where our systems sometimes try to simplify things and say, well, this content is actually the same.
It's the same content in the same language, but maybe there's a country-targeting aspect here. Maybe there are hreflang annotations here. But what can happen is that our indexing systems will fold these together and say, we will just index one of these versions. And that's something that happens very frequently, especially in the German area, where we have Germany, Austria, and Switzerland. They all have German content. And it's very easy for our systems to look at some of these pages and say, well, all of these pages are the same; we will just index one of them. And in Search Console, what will happen then is we will focus on the canonical URL. We'll combine all of our signals there and show that to you in Search Console. In the search results, if you have hreflang set up, then usually we will show the individual URLs in the search results and guide the user to the appropriate local versions, if we can figure all of that out. But the tricky part is that in Search Console, you would not see what is actually being shown in the search results, because we try to simplify it and show you the canonical. So that makes it very tricky to debug and figure out what is actually happening here.

There are two approaches that you could take if that's happening in your case. I really don't know; you mentioned Canada, so I'm just guessing French or English content. On the one hand, you could just leave it be, because probably, if you look at the search results and you see the right URL being shown, then the data in Search Console is confusing, but it's working out well. So maybe that's OK. Maybe it's not worth the extra trouble. The other approach is to make sure that these pages are significantly different.
So if you have a Canadian and a US home page, or a US website or product page or whatever, and they currently have exactly the same English content, then try to make sure that the Canadian page is significantly different from the US page, so that when we look at those pages, we don't even consider folding them together. And if we don't fold them together, we will just index them as separate pages. We can still use the hreflang annotations between those pages, and that will also just work out. The advantage of that is that in Search Console, you will have a little bit clearer data on what is actually being shown for which of the individual country URLs. So that's kind of the approach I would take here. It might be, though, that you're seeing something completely different, something really weird and unique in your particular case, but it's hard to tell from your question. If that doesn't match, I would definitely start a thread in the Help Forum, because there are lots of people there who have experience with these things. And you're also welcome to just post again in one of the future Hangouts, where we can look into the question in a little bit more detail.

Link attribute question. Hi, John.

Hi.

Yes. Hi, John. Just a follow-up question from the previous one. I think that one is facing the same issue. I've already raised it in the previous Hangouts as well, where you asked for examples.

What is that?

You also kind of published the question in the Help Forum as well. But I'm not finding any particular answer to that kind of problem. I mean, our pages are in two different languages, and they are targeting different countries. But still, Google thinks they are kind of the same. So I'm just wondering what the possibility would be to help.

I really can't understand you, from an audio point of view. But I think you're saying you have a similar problem to that.

Yes, John.
So I mean, I posted it in the chat in our previous Hangout, where you asked for examples of the website. We have this website in Japan and Brazil, but Google somehow thinks Japan is a duplicate of Brazil. So the Japan home page is now not indexed by Google. And when searching in the country for the brand name, the result is gone.

OK, can you maybe drop the URL again in the chat and add a short comment? Then I can pick that up afterwards.

Sure, thank you.

Sure. OK, link attribute question. If I have hundreds of outbound links on my site, which are affiliate links to large sites like Amazon, Booking, or eBay, and didn't qualify them with a rel nofollow or rel sponsored, does my site get penalized someday in the future? The background: it would be a big hassle to add the attribute to all existing links.

Yeah, so I think there are a few things that come together here. On the one hand, it would be nice to have the rel nofollow or rel sponsored on these affiliate links. However, for a lot of the more common affiliate setups, we do have an understanding of what these URLs look like, and we essentially treat them as nofollow links on our side already. So if you're using these common setups, these bigger companies that have had affiliate setups for a longer period of time, then that's something where we already essentially treat those as nofollow links. So my guess here is that in your case, that's not necessarily a problem. In general, going forward, I would still recommend using the rel nofollow or rel sponsored on these kinds of links, just to make sure that you don't rely on Google figuring it out itself, because that's kind of the right approach here.
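As a rough sketch of how adding the attributes in bulk could look, a site could batch-rewrite outbound links that point at known affiliate hosts. The host list, the sample HTML, and the regex-based approach here are all simplified assumptions for illustration; a real migration would want a proper HTML parser rather than a regex.

```python
import re

# Hypothetical list of affiliate hosts to qualify; real sites would
# match their own affiliate URL patterns here.
AFFILIATE_HOSTS = ("amazon.", "booking.", "ebay.")

def qualify_affiliate_links(html: str) -> str:
    """Add rel="sponsored nofollow" to <a> tags pointing at affiliate hosts."""
    def add_rel(match):
        tag, href = match.group(0), match.group(1)
        # Only touch affiliate links that don't already carry a rel attribute.
        if any(host in href for host in AFFILIATE_HOSTS) and "rel=" not in tag:
            return tag[:-1] + ' rel="sponsored nofollow">'
        return tag

    # Simplified: handles plain <a href="..."> tags, not every HTML variant.
    return re.sub(r'<a\s+href="([^"]+)"[^>]*>', add_rel, html)

print(qualify_affiliate_links('<a href="https://www.amazon.com/dp/X1?tag=demo">Buy</a>'))
```

Non-affiliate links pass through unchanged, so the rewrite can be run over existing templates or stored content without touching ordinary outbound links.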
In general, when it comes to links where you have some kind of relationship with the other websites, and that's the reason why those links are there, what can happen is this: when the web spam team looks at that and sees that there's a significant problem here, that these links are causing problems with regard to the quality of our search results, and we need to take action to ensure that the search results remain reasonable, then the web spam team will usually apply a manual action to the website that has these links, and essentially treat those links as nofollow on our side. So that's kind of the primary step there. We would notify you in Search Console, and you could clean that up and do a reconsideration request and all of that. So I think that might come into play a little bit more in situations where it's clearly a problematic setup with regard to the links. When it comes to affiliate sites like these, we've seen them tons of times, and we can essentially handle that on our own; our systems can nofollow these internally as well. So that's not something where I would lose any sleep over the old affiliate links that you have left over and are still running. But going forward, I think it's good practice to make sure that you have everything lined up properly. From our point of view, it's also not a sign that you're doing anything problematic. If we recognize that you have affiliate links on your site, that's not bad. It's not something that you need to hide. Essentially, that's just something we try to recognize so that we can treat these links appropriately. But it's not a sign that your website is of lower quality just because you earn money through links on the website.
So traditionally, when affiliate sites come to us and say, oh, they're not ranking well, and they think it's because they're an affiliate site, then usually the kinds of affiliate sites that are problematic on our side are the ones that essentially don't have a lot of really good content. So it's not so much the fact that there are affiliate links on the site, but rather that they're trying to take the easy approach and just copying a feed of products and putting it on their site, or maybe spinning some descriptions and putting those on their site. And then it's really mostly a matter of the content, not the links. So from that point of view, you don't need to hide these affiliate links. You don't need to do anything special with them. We generally figure that out ourselves.

Can you take a look at Google News? Sites that were approved after December 2019 are not ranking and not showing in the Google News tab.

Yeah, I don't know. I saw some threads on Twitter, and I think Danny Sullivan passed that on to the Google News team. I don't have any details to share there. Generally speaking, these kinds of issues are handled by the Google News team. It's definitely not the case that all new websites after 2019 are not shown in search. How things are handled in Google News, I don't know in detail.

My question is about structured data. The template of product pages in our CMS consists of two parts: one for the user's PC, the second part for the user's mobile. So it's the same URL, just with display:none for the unused part. Part of the content is basically duplicated. So my question is, should I use structured data for each displayed part, two times, or is one time enough?

Yes, one time is enough. So you don't need to duplicate the structured data on a page. In general, I would recommend trying to use a more responsive layout so that you just have the content once. It makes it easier for maintenance.
It makes it a little bit easier with regard to updating things on your side. So that's kind of the approach that I would take in the long run. But sometimes it takes a lot of work to shift from one kind of mobile interface to a different kind of mobile interface. So it's not necessarily a big problem. But for structured data, I would just include it once. You can theoretically include it twice; our systems are not going to break if you do that. But the problem, if you do that twice, is that we need to make sure that we're really getting exactly the same information both times. And if you have it there twice, then the chance that something is not completely aligned definitely exists. So I'd watch out for that.

And the second part of the question is, how bad for SEO is this type of content? So, a template with display:none for PC or mobile.

Like I mentioned, it's something that some sites do when it comes to responsive design. We generally recommend just having one layout and using more responsive elements on the page to adjust the layout appropriately. But it's not something that I would say will break our systems. So from that point of view, going forward, when you think about a redesign of your website, I would try to find a way to improve that. But it's not critical that you need to do that right away.

Many sites, even regional newspapers, lost their favicon in mobile search a while ago. Mine as well, and as far as I know, they all abide by the published favicon policies. Is there something else to consider?

I don't know. I'd need to have some examples. The one example I have is someone who has kind of a missing home page and a redirect. I passed that on to the team. But I'm not aware of anything where we generally dropped all favicons. So if you have some examples, send them my way.

For current events, older, well-linked pages turn up high, while fresh pages with relevant content from the same site rank lower.
Or they are not shown at all. Is there any way to address this?

I guess it's sometimes a bit tricky, because as a site owner, you might say, well, this page is more important than another page. When it comes to SEO, it's not the case that you can provide any kind of preference or prioritization within your own website. Essentially, we will look at these pages, and if we have them in our index, we will try to rank them appropriately in the search results. It's not something where you can say, well, instead of this page, show that page, but still keep the other page indexed. So the strong approach is just to noindex the pages that you don't want to have shown at all. But that's kind of taking a big hammer to this problem. The other thing that you can do, in general, with websites, is to make it clear overall, when we look at your website, which of these pages you think are really important. One way that you can make clear which pages within your website are really important is with the internal linking of your website. So if there are pages that you think you'd like Google to treat with a little bit more value, then make sure that users, when they go to your website, recognize those as being important as well. That could be something like linking to them from the home page, or linking to them from a sidebar, from various parts of your website. You really position them prominently within your website. So that's kind of the approach that I would take there. If you really strongly feel that these pages are more important than those other pages, then make sure that that importance is really visible to people. Because there's no meta tag or structured way where you can say this one should be more important than that one, you really have to show it through your website as well.

Google's crawler has slow caching issues after the Google bug update. Can you look into this?

I don't know what this is referring to.
So we did have some issues with regard to indexing, but that was unrelated to Google's crawling. So if you're seeing anything specific about crawling being slower, then it would be useful to maybe look at the website itself. I'm definitely not aware of us crawling the web less frequently overall. When it comes to crawling, there are a few things that can come together, and maybe that's something where your site is specifically being affected at the moment. We have a blog post, I think from last year or maybe even longer ago, about crawl budget, where we talk a little bit about how we determine how much we crawl from a website. And that's based, on the one hand, on our demand, meaning what we think is important to be indexed from the website, and on the other hand, on technical limitations on the website's side with regard to how much we can crawl without causing problems. And sometimes, when you see issues with regard to crawling, it maps into one of those categories, or both. So I would double-check that blog post about crawl budget and see how it may or may not apply to your website.

I consult for an industry-leading brand. They publish the main yearly status report for that industry. They host the main conference for it, with thousands of people attending each year. They have higher authority, as in DA, than any other player, more and better reviews on comparison sites, and pretty great content. Still, they don't seem to be able to rank on page one for non-branded keywords. And keywords tend to rank consistently in position 11, across the keywords we monitor, but also across the keyword corpus from industry tools. Is there a reason why a page would be banned from page one and pushed to position 11 without there being a manual notice in Search Console?

I'm not aware of anything in that regard.
So as far as I know, we definitely don't have any algorithm or any kind of manual setup that would say your pages cannot rank on page one but need to rank on page two in position 11. So that's something where I'm not aware of anything like that. It's really hard to say without having specific information here, though. It's possible that something on our side is stuck or not happening as expected. It's also possible that things elsewhere on the web are just also really good, and maybe it's really a competitive area. And just because you're really good doesn't mean that you're good enough to be shown on the first page of the search results. But it's definitely not the case that we would have any kind of limitation that says, oh, this website should never be shown in positions one through 10. Or at least I'm not aware of that. I think that would be kind of awkward to have, because if we think a website is relevant enough to show in position 11, then it can sometimes be relevant enough to show in position two or three as well. So just in the interest of providing relevant results for users, I don't think it would make sense to push a website to page two for any particular reason.

We have a huge website with millions of pages and don't want them all indexed. We have sitemaps that have change frequency and priority attributes. However, we're in the process of changing these to be last modified and priority attributes. Does last modified, instead of change frequency, help Google to pick up new and fresh content better?

So we don't use change frequency or priority in sitemaps. It's something that we looked into a lot in the beginning, and we noticed over time that it's not providing any useful information to us. So within a sitemap file, we essentially look at the URL and the last modification date. Those are essentially the primary attributes there. There are some other things, for example for images or videos, that you can put in sitemaps as well.
But essentially, we look at the URL and the last modification date. The URL helps us to recognize new pages faster. The last modification date helps us to recognize pages that were updated recently. So with regard to recognizing when things change on a website, the last modification date is really useful. With regard to just recognizing when pages are new, listing the URL alone is good enough for us. So if you currently have the URL plus change frequency or priority attributes, then with the URL alone, we can already pick up the newer pages quickly. Having the last modification date there is useful when you change older content on a website regularly. But if you're just adding new content all the time, then probably the setup that you have is OK as well.

Is the priority attribute actually used by Google for any crawl or ranking factor?

Yeah, like I mentioned, I don't think we use it at all. I believe within the custom search engine setup, it might still be used. But at least for Google Search, we don't use it at all. We just noticed over time that it really wasn't providing us with any additional new information. We noticed that some sites set everything to the highest priority, and then that's not very useful. Other sites use simple algorithms where they set priority based on the directory structure of their website, and from our point of view, that's something we see already, so that's also not providing any additional information. So in the end, we decided to just not use priority at all. It's still a part of the sitemap specification. It doesn't cause problems if you use it. It's just that we haven't found it that useful when it comes to processing and using content in a sitemap file.

A question related to Search Console: I'm not able to add my website in Search Console. The website is www.sg. It's being indexed and crawled, but I can't create a Search Console account. It shows an internal error.

I don't know.
I'll double-check with the team. That's kind of a unique domain name, so maybe our systems think that something is missing there. But I'll double-check with the team. Our website recently faced a fraudulent DMCA attack, after which our content was taken out of the search results. The problem is that the person who filed the complaint is the actual person who copied the content from our website. We filed a DMCA complaint against their website, and they just filed a bogus DMCA complaint back against us, which seems to be working. I want to know if Google checks a website's indexing history before taking action or not. Also, how can Google ensure that only the person who wrote the content originally wins? If the content is backdated, which WordPress allows very easily, Google displays the backdate in the search results too. Does Google provide a service where they can identify when content was originally published, or at least indexed, so that one can find out which of the two copies of the content holds the copyright and which one needs to be taken down? So these are really good and complicated questions. I think the general kind of theme around all of the DMCA process is that this is a process that we're required by law to follow. And it's not something where we can kind of interpret the law in any particular way. Essentially, there's a fixed process in place here. We do see various people trying to abuse the DMCA systems, and our lawyers and our systems are well-tuned to that. So that's something where it's not that we just blindly follow everything, but rather we do try to weed out all of these abusive uses of these systems. But in general, if you're seeing kind of this back-and-forth here with regards to DMCA, then that feels more like something that you would need to handle on a legal level, and not something where Google would be able to resolve for you.
So that's something where it might be that Google's lawyers are able to help you with some of this, but for the most part, it's a normal legal process that we tend not to get involved with directly. With regards to having a service where Google could kind of identify the original source of the content, I think that's really tricky, and I doubt that we would have such a service. So as far as I know, we definitely don't have one at the moment. But it's really kind of a tricky situation, because what I saw, way before joining Google, when I looked into this problem a little bit from outside of Google, is that spammers are often very technically savvy. And it's possible for them to sometimes get things indexed before the original content is indexed. So if Google were to just look at the date of indexing, then that's not necessarily the date when the content was first created. And if there were an incentive to get things indexed quickly to kind of take ownership of someone else's content, then that's something that I'm sure people would try to take advantage of. So that's, I think, just kind of an aside. In general, though, all of these legal questions are things I can't directly help with, because I'm not your legal advisor, and I can't give you legal advice. So my recommendation there would be to get help from a lawyer on your side and to figure out an approach that you can take there. Because this is, in general, not something where it's a matter of going into Search Console and clicking the right buttons, but rather where you really need to make sure that you're following the appropriate legal steps so that you can resolve this problem. So if I can follow up on that: Google can't tell, from a legal perspective, if that's original content. But from a ranking point of view, do I need to worry about a website grabbing my content? Or can Google figure out that I published it earlier, if I actually did publish it earlier?
So will it actually rank higher because I published it first? We have a number of systems that try to figure out which version of the content we should be showing in the search results. So that's something where sometimes it comes down to things like that, but it's not always just that. One scenario that we see from our side all the time is that we will publish something on our blog, on the Webmaster Central blog, and it'll be indexed there first because we published it first. But someone else will take that content, sometimes copy it completely, and add some additional information. And then from a search point of view, we're kind of in a tricky situation: if someone is searching for that content, should we show the original source, assuming we know that this is the original source, or should we show that other article, which adds some additional information to it? And from that point of view, sometimes it really makes sense to show that other article, because it actually does have some additional information. So even if we were able to map perfectly that this article or this piece of content was first published here and is probably this person's copyright, that doesn't mean that we would always rank that version of the content first when someone is searching for it. So it's kind of a tricky thing. In general, the one place where this falls into a little bit of a clearer bucket is when we can recognize that a whole website is essentially just copying content from other people. Then it's a lot easier for our systems to say, well, there's actually no real value in showing this website, because they're not adding new value. They're always just copying things from other sites. We might as well just show the other sites instead. So that's kind of the clearer situation. Everything else is kind of a vague, tricky realm of which one of these versions is really the better one to show to users. Thank you.
John, just as an aside, do you know how many DMCA requests are filed every day? Is it a big number, or is it just a small number? I think it's a big number. I don't know the exact number, but I think it's a big number. So I mean, we have a video on some of the DMCA things from Google's official side. But one of the reasons I think it's a big number is that some of the bigger content providers do these things automatically, where they kind of automatically crawl the web on their own and figure out, oh, these people are copying our content, and it's clear that they're copying our content. They just file DMCA complaints automatically. So that's something where it ends up being a really large number fairly quickly. Oh, I think in the transparency dashboard, we also have the DMCA complaints. So you could probably look up kind of the largest submitters there as well. That might be interesting. Yeah, thanks. It's complicated, because on the one hand, I know we would like to help people more with this process. But because it's so kind of legally tricky, it is very hard to do that without ending up giving people legal advice. And that's something that we really can't do. All right, so I don't know, let's maybe move on to some more live questions from your side if there is anything. Hey, John, I have a question. So since 2017, Google started ranking pages with intrusive pop-ups lower. But there are exceptions, like cookie consent, like age verification, things like that. So my question is, what if I slip some promotional information into my cookie consent pop-up? Our systems try to recognize these kinds of legal overlays. And that's something where, if it's not clearly something that we would consider this kind of legal overlay, then we might assume that it's something that's more part of your content. So that's, I think, the tricky part there. I see. So how much is considered intrusive?
What if I want to promote something using my pop-up, but it only takes up like 15% of the screen and doesn't overlay all the content? Is that OK? We don't have a specific number, so from that point of view, it's hard to say. In practice, if it's a small part of the page, then generally that's OK. So with regards to the intrusive pop-ups, the ones that we primarily see as being problematic are really where you take the majority of the screen and just use that for a pop-up, where when people go to your pages, they're not able to get to your actual content. That's what we would consider problematic. So generally, if the pop-up doesn't interfere with the main content, you're probably OK. Yeah, yeah. OK. Thanks. All right. More questions from any of you? Hi, John. Hi. This is Bapoon from India. I just wanted to know if the Google News indexing issues will be resolved soon. There are some issues regarding indexing with the new algorithm. I saw some tweets around that, but I don't have any insight on that. Actually, it started after the last December update, when the BERT algorithm was introduced. Yeah, I don't have any insight into what has been happening on the Google News side. Sorry. Can you pass it to the team? We did pass it to the team to take a look, yeah. Thanks, John. Thanks. Sure. Hi, John. So one of my colleagues showed me a Romanian query for a medical term where one of the top five results is a website that has an article on that topic, but there are a lot of links towards Viagra and other things like that. It also seems to show up in the cache. So it looks like Google has already crawled it, but for some reason the anti-spam systems haven't kicked in, so to speak. Here's the actual search. And this is the site that, at least for me, shows up in position four. So it looks like they might have been hacked recently. I'm just not sure how much it takes for Google to kind of notice that and see, oops, there are a lot of links to spammy places. Yeah.
I don't know that site; I'll take a look afterwards. OK. Yeah. But I mean, we do have systems that try to recognize when sites get hacked. I mean, I don't know the situation of this particular site, but in particular it's a lot easier for us to recognize these kinds of things when there are significant changes on the site. Like when a website is a school website and suddenly has a lot of pharmaceutical content on it, then that's something where it's like, well, probably something went wrong here. But if a website was already kind of in a medical area and suddenly has pharmaceutical content on it, then that could be something that's a little bit more of a natural progression there. That might be a little bit trickier for our systems to pick up on. So that's something where sometimes we can pick it up fairly quickly and fairly well, and sometimes we need to improve things overall. Right, I was just asking, since this is a medical topic, I know that Google kind of tries to take a bit of a stronger stance in terms of the quality of content and relevance, things like that. Yeah, yeah. I don't know. I'll take a look and pass that on to our team. Cool. Cool. All right, we're kind of out of time. Hi, John. Hi. Yeah, Bappan again. I just wanted to know, I had an old domain on which I began a website this year. It is crawled by the Google desktop crawler. So I wanted to know if it will move to Google's mobile-first indexing soon. The website is responsive and AMP-enabled. So our systems use a number of factors to determine when we should switch a website over to mobile-first indexing. And usually that's something that happens automatically over time, or we will reprocess the websites that we currently don't have in mobile-first indexing and double-check that they're actually OK. But in general, it's not something where you would have any kind of ranking or indexing advantage if you were switched over to mobile-first indexing.
So it's not that you kind of need to do that as quickly as possible. It's really just, from a technical point of view, we will switch over to the mobile crawler. And if everything is OK on your website, it will continue to work as it had before. So it's not that we would say your website is bad if it's not in the mobile-first index yet. All right. Actually, from September onwards, all websites will be moved to mobile-first indexing. That's where we're at. Yes, we initially said that in September we would shift the rest of the web over to mobile-first indexing. But we decided to push that date back to the end of March to give sites a little bit more time if there are specific issues that you need to resolve on your website. All right, thanks. Sure. All right. With that, maybe let's take a break here with the recording. I still have a little bit more time if any of you want to hang around afterwards. But let's maybe pause the recording here. And in the meantime, for anyone who wishes to go, I wish you all a great weekend. Thank you all for sticking around and listening in. Thanks for all the good questions. And hopefully, I'll see you all again in one of the future Hangouts. Bye.