All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where webmasters and publishers can join in with all kinds of search-related questions. We've already chatted a little bit. But if any of you want to get started with the first question, feel free to jump on in.

No? If no one has one, can I? I'm curious about the object recognition within images, what Google sees. So for a news site, if we use the regular press images that a lot of news sites publish with the same article, do you think it would be more valuable if the readers see a unique photo, one that Google doesn't recognize from other articles on the same topic? What is your take on that?

I think having a unique photo is definitely a good idea. Because if it's the same photo that's reused across a number of different articles, we'll pick one of those articles for image search to show as the landing page for that. So you're in the same group with everyone else if it's the same photo. Whereas if it's a different photo, then we can show it separately in image search. But that's specific to image search. It's not the case that if you have good images, they will make your site rank better in web search. So it's kind of separate there. But that's something where sometimes good images show up as well in the normal search results, like where you have the image onebox on top or something like that. So I think if you have a chance to have your own images, that's definitely worthwhile.

OK, great. Thanks.

And I guess with regards to object recognition, one of the things there, I would definitely make sure that you tell us what these images are about as clearly as possible. So with the alt text, with the caption, all of the usual things.

OK, great. Thanks. Sure.
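To illustrate the kind of markup John is describing, making the subject of an image as clear as possible, a minimal sketch might look like this (the file name, alt text, and caption are all hypothetical):

```html
<!-- Descriptive alt text plus a visible caption for a unique article photo -->
<figure>
  <img src="/images/zurich-office-opening.jpg"
       alt="Visitors at the opening event of the new Zurich office">
  <figcaption>Guests at the opening event of the new Zurich office.</figcaption>
</figure>
```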
All right, let me see what all made it to the submissions. Looks like there are not a ton of them that were submitted, which is fine as well. It gives those who don't come to these regularly a little bit of a chance to be visible.

I see in my Search Console that many of my blog pages have been crawled but not indexed since the last major update. A lot of these are index pages which list different articles, and an archive index with a portion of our how-to articles. Should we change these archive URLs to noindex, follow, or change the structure of the blog and split it into sections with relevant articles, as opposed to just the blog section?

So I think first off, it's important to know that this situation, where we've seen URLs and we just don't index them, is completely normal. That's something that happens to pretty much all websites, where we can find a lot of different URLs on a website if we crawl deep enough. But that doesn't mean that these URLs are necessarily useful with regards to search. So that's something where we might crawl these, we might look at them once and say, oh, this is interesting. But then maybe we don't keep them in search for the long run. So in particular, things like index pages or archive pages, where actually the content is already indexed and it's just a matter of a page that links to this content, that's something we don't necessarily need to have indexed. So we might crawl but not index them. In the new Search Console, this is a little bit more visible. So that's something where suddenly people are seeing this and they're wondering, oh, well, how do I fix this? But it's really not something you need to fix. It's simply normal when it comes to search. We just don't index everything that we've seen.

Can I use a non-English language in the image geo-location tag in an image sitemap? Yes, you can use a non-English language there. What I would do, as a kind of rough approximation of whether it works, is just try that text in Google Maps. And if that text in Google Maps points at a specific location, then you can be pretty sure that we can figure out what that location is. That said, I don't know how much weight we would give the geo-location tag in image sitemaps. That's something that I rarely see sites use. So it's very possible that our algorithms don't actually rely on that too much. But if you can specify it, why not?
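For reference, the geo-location element being asked about is part of Google's image sitemap extension. A minimal sketch, with hypothetical URLs and a non-English place name of the kind the question mentions, might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/article</loc>
    <image:image>
      <image:loc>https://example.com/images/photo.jpg</image:loc>
      <!-- A non-English place name is fine if Google Maps can resolve it -->
      <image:geo_location>Zürich, Schweiz</image:geo_location>
    </image:image>
  </url>
</urlset>
```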
Need help with a structured data issue? We added our phone number in structured data for organization and customer service. However, it was the incorrect phone number. Oh, no. We've since updated it. But how long does it take for Google to make this change in the structured data?

So if you've made an update in the structured data there, then that's essentially the right thing to do. One thing you can do is use the submit to indexing feature, either in Fetch as Google or, I believe, in the URL Inspection tool, to let us know that this page has changed and that we should reprocess it. However, things that are more of a secondary nature from the content, so not specifically or not immediately tied to the indexing of the text on the page, the title, the URL, those kinds of things, those sometimes take a bit of time to get bubbled up again. So if we're showing this phone number in maybe a knowledge panel on the side, or sometimes if it's an image or something that's embedded within a page, then all of these things take a little bit longer than normal recrawling of the HTML page. We kind of have to let all of those pipelines run as well. And sometimes that can take a week or so. So I would definitely give that enough time.
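For context, the markup being updated here would typically be schema.org Organization structured data with a customer service contact point. A minimal JSON-LD sketch, with a hypothetical URL and phone number, might look like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "url": "https://example.com",
  "contactPoint": {
    "@type": "ContactPoint",
    "telephone": "+1-800-555-0100",
    "contactType": "customer service"
  }
}
</script>
```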
Hi, John. All right, I hear a voice there. Hi. So I have a question regarding the structured data. I've seen a lot of websites actually adding all these reviews to their product pages and showing them across the pages. So I'm just wondering whether it's the right approach, or is it a bad practice, to add the schema markup so that it gets all those star ratings and reviews in the search results? So is it fine, or can we do that?

In general, that's fine. If you watch out for the policies that we have for structured data for the rich results, then I would go for that. That sounds good. Sure. Thank you.

All right. Someone else? OK. Yes. Can you hear me? Sure. OK, excellent. I think my question goes into structured data as well. We've got a couple of TV shows that are being displayed on a couple of brands of ours. And unfortunately, the panel gets mixed up. So sometimes it just displays the wrong logo for the wrong brand, or the wrong link to the wrong site of our TV show. So I was wondering how we can take charge of that, and whether there's a possibility to fix it manually. Because we have actually done our best in the structured data to take care of it, but it never changed.

And where are you seeing this? Is it the logo image? Yes, I put a link down here in the description in the comments box. You can see there's a screenshot. Let me find that. Oh, I don't have permission. Oh, no. Shoot, hold on. OK, I'll come back to that. Yeah, OK, I'll just give it to you. Yeah.

So usually, if it's something that's in the Knowledge Graph, I think it kind of goes into the same direction, in that it just takes a little bit longer to be updated. However, if you've made those changes a while ago now, then maybe that's something that we just need to pass on to the team as well. For some of these kinds of informational things that we show in the knowledge panel, it's also a matter of making sure that everything aligns. So for example, if you have a Wikipedia page on these TV shows or on these brands, make sure that the correct logo is linked there as well, so that we can really see that everything matches together and we can trust that information that's provided.

Yeah, consistency. But we took care of that, actually, and it never changed. OK. Might be because we've got the same show on different brands, but somehow it got tangled up. That can make it trickier. Yeah, if we can't find this kind of one-to-one mapping between these different items, then maybe we'll show the wrong one, or maybe we'll assume that all of these brands are actually the same and that we can use them interchangeably, which, I don't know, sometimes is the case, but maybe not in your case. You should be able to watch it now. OK, let's see. OK, I don't know. I'd have to double-check. Thank you. Good question.

All right, let's see what else we have here. Is voice now one of the ranking factors? Should we rely on it, or simply rely more on speed and mobile-friendliness of a website? So I don't know how we would make voice a ranking factor, so that's one part. I think over time, people will use voice more and more to search. And that's something I would try to watch out for, but it's not something where, at the moment, I'd say you have to do something specific for voice search. For the most part, with completely normal websites, if we can understand their content for other things in search, we should be able to understand the content for voice search as well. And depending on the type of query that is given by voice search, maybe it makes sense to show a website, maybe it makes sense to show an answer, all of those different things. So I wouldn't see voice as being a ranking factor of its own at the moment.

As a follow-up with regards to the age-gate tweet, let me double-check that one. Oh, with regards to a kind of pop-up that you show if you need to check the user's age first. I think, for the most part, that's still a good approach to take. The important thing with all of these kinds of legal interstitials is whether Googlebot is able to click through them. So if you're doing something where you're redirecting to another page for this interstitial and then going back to your primary content afterwards, setting a cookie maybe, if the user enters a date that's old enough, that wouldn't be something that Googlebot would be able to crawl through. So with regards to that, if you have set up this configuration where you redirect to a different page and then require users to click and come back with a cookie to the main content, then I would assume we would not be able to index any of that content, which is probably not what you're trying to do. There might be situations where you say, well, this is the only way I can do it. So maybe you have to take that into account. But in general, if you do want to have that content indexed, you need to present that age interstitial in a way that Googlebot can still crawl the normal content. So that should be something like an HTML div that you overlay on the page, keeping the rest of the page essentially still loaded, so that Googlebot can still see it. I imagine that's probably the best approach at the moment. You could also do something with JavaScript, using JavaScript to load an interstitial, which, from our point of view, should work fine as well, again, provided that the normal content is still in the HTML afterwards. If you need to do a type of interstitial that really goes to a different page and that doesn't load any of the normal content in between, then what you might want to do is say, well, this content won't be indexed, but maybe I can make a simpler version of this content that doesn't need to be behind an age interstitial. So perhaps you have more of a descriptive page of the type of services that you're offering. You're saying, well, this is content I can show to everyone, and it can get indexed. And from there, users can click through to my other content, which is behind an age interstitial, or behind maybe a country interstitial, or whatever you need to do there. So those would be the primary recommendations that we would have.
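As a rough illustration of the overlay approach John recommends, here is a minimal sketch of an age interstitial that keeps the normal content loaded in the HTML, so Googlebot can still see it (the text and styling are hypothetical):

```html
<!-- Age gate shown as an overlay; the page content below stays in the HTML -->
<div id="age-gate" style="position:fixed; top:0; left:0; right:0; bottom:0;
                          background:#fff; z-index:9999;">
  <p>Please confirm that you are 18 or older to view this content.</p>
  <button onclick="document.getElementById('age-gate').remove()">I am 18 or older</button>
</div>

<main>
  <!-- The normal, indexable page content is served here as usual -->
  <h1>Product information</h1>
  <p>...</p>
</main>
```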
All right, quality score metrics. The quality score metrics in Google PPC show an above-average landing page experience and click-through experience notification. Can this be used as an indication that our landing pages for particular sets of queries are already optimized for SEO, or for Googlebot?

So the simple answer here is that the ad-related tools, the AdWords landing page tests, the quality scores that you have there, they're not related to SEO. These are completely different systems on our side. And from an ads point of view, we might say that something is fine. And from a search point of view, we might say, well, we don't think this content is actually relevant. So those are completely different things. Sometimes the ads landing page score can give you some information about things that you can improve. But they're essentially completely different. They're something that you need to take into account in completely different ways. And I think the second part of your question essentially asks whether that's because these are different bots, on the one hand. But it's not so much that they're different bots. It's really more that they're completely different systems on our side. And like you mentioned, you can easily have an ads landing page that's set to noindex, which would be perfectly fine for an ads landing page, but obviously not indexable at all from an SEO point of view.

When a person writes an article, does it help the page to rank better to link to another site in the article? For example, if I write about adjustable beds on my site, does linking to a manufacturer's site help the page's ranking in Google, or help the understanding of the topic a little bit better? It does help us to understand the context of the page a little bit better, but it doesn't really affect its ranking at all. So it's not the case that if you link to a handful of good sites, suddenly your pages will rank higher because we think, oh, your site must be good as well. This was, I don't know, quite a popular tactic maybe 15 years ago, where people would put up a completely spammy page, and at the bottom, they would link to CNN, or Yahoo, or Google, or something, and assume that because there's a link to these legitimate sites, suddenly this spammy content would be more worthwhile to search engines. And at least at the moment, that's not the case. So if you write content, your content should be able to stand on its own. The links definitely help users. So if you link to other content that you think is relevant for the user at this point, for maybe the problem or the issue they're trying to solve, that's a good thing. With regards to follow and nofollow, use your normal techniques there. So if it's a paid link that you're putting there, because of maybe a relationship there, then obviously use nofollow. But otherwise, feel free to use a normal followed link for something that's normal content that you're just referring to. So from my point of view, I think the primary value there is really for the user, in that the user comes to this page and they see a comprehensive view of this topic. They have more information they can follow up on if they want to, but essentially they have everything that they need there.
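As a small sketch of the follow/nofollow distinction John describes, with hypothetical URLs:

```html
<!-- Normal editorial reference: a regular, followed link is fine -->
<a href="https://bed-maker.example.com/adjustable-beds">the manufacturer's site</a>

<!-- Paid or affiliate relationship: mark the link with rel="nofollow" -->
<a href="https://bed-maker.example.com/?partner=123" rel="nofollow">adjustable beds</a>
```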
I have a question about embedding third-party reviews on our website, such as reviews from Google, Yelp, Facebook. A third-party provider offers a bit of JavaScript-based script that you can put on your site, and it'll display the live reviews from the entities above. However, when you look at the page source, the reviews are not included in the page source, only in the JavaScript. What is Google's view on this? Is this acceptable, or will Google frown upon it?

So from a purely search point of view, it's fine to have reviews on your website that come from some other places. We do render pages to process some of the JavaScript. I don't know if we'd be able to process this particular JavaScript. You can test that using the mobile-friendly test, for example, to see a rendering of your page. And you can also double-check the HTML that's generated from rendering there. So that's one way to test it. The one thing you need to watch out for here is that these reviews should not be marked up with structured data. So these should not be embedded in the way that normal reviews of products on your site might be. It should really just be something that's kind of content on your site. Because if you're using third-party reviews, that's something that, as far as I know, would go against our structured data guidelines. So embed them if you'd like to see them on your pages, but make sure that they don't generate any structured data.
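A sketch of what such an embed might look like, with a hypothetical widget provider; the point is that the reviews stay plain page content, with no schema.org review markup attached:

```html
<!-- Hypothetical third-party review widget, filled in by JavaScript at load time -->
<div id="third-party-reviews"></div>
<script async src="https://reviews-widget.example.com/embed.js"></script>
<!-- Note: deliberately no Review/AggregateRating structured data for these
     third-party reviews, per the guidelines John mentions above -->
```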
I changed the crawl rate to 2 per second. It's hard for me to understand the English in the question here, so I think it goes in the direction of the general crawl rate and indexing speed of a website. I think there are two sides here that are always relevant with regards to a new website, or any website in general, when it comes to the crawl speed. On the one hand, we try to limit the speed to avoid causing any problems on a website. So if we crawl too fast, then we might slow your server down, which doesn't help you either. Or we might even cause problems on your server. That's something that you can also limit with the crawl rate setting in Search Console. So the crawl rate setting there is specifically about the maximum crawling that we would do. It does not mean that we would crawl that much. It's just, if we wanted to crawl a lot from your website, we need to make sure that we're not going above those limits.

The other aspect that comes into play here is, kind of like we talked about in the beginning with regards to crawling and indexing, that Google doesn't index everything that it crawls, and we don't crawl everything that we've seen links to. So just because we've seen a website like this, maybe we've seen a sitemap file, maybe you're writing a lot of articles, it doesn't necessarily mean that we would crawl and index that content that quickly. So that's something that sometimes just takes a while, to build up that trust, essentially, with Googlebot, so that Googlebot knows that when something new comes out on your website, it has to go and crawl and index it as quickly as possible. And sometimes that takes a while. So that's something that, especially if you have a new website and you're generating lots of content, might be playing a role there as well. So on the one hand, we have the maximum crawling that we can do, which is based more on technical things. And on the other hand, we have the maximum crawling that we would like to do, which is based more on whether or not we think it makes sense to spend a lot of time crawling and indexing content from your website. So both of those you can influence in different ways.

John, I have a question, if you don't mind, in regards to the last two questions, JavaScript and indexing. I just pasted the URL into the chat. It's a new partner we're working with in Australia. It's a relatively new site, and it's built with React, JavaScript. And they're having a lot of indexing issues. And I just wondered if you'd mind taking a quick look and seeing if it's just because it's new and in JavaScript, or whether there's another problem with it. Oh, man. There it is. Let me make this pop up. That's terrible. Oh, you did? I think that shouldn't cause any problems for search. No, no, but I didn't realize there was one. Oh, the flags there first, OK. I don't know. I'd have to take a look. Maybe we have more time towards the end, and I can take a look at that. You've got more questions, yeah.

Yeah, I think in general, with regards to sites that are purely built in JavaScript, if it's like a pure React-based site that doesn't do any pre-rendering, then the thing to keep in mind is that all of this rendering takes a little bit longer to be done. So what can happen is we'll crawl the HTML, and then we'll see, oh, we need to do rendering here. And then we'll put it in our list of things to render. And then it might take a couple of days, maybe even a week or so, for us to actually render the content. So if you have a lot of items on the site that need to be crawled separately, then that takes a lot of time. If you have content there that needs to be updated quickly, maybe new things coming in and old things dropping out fairly quickly, then that is probably something where you'd want to go more in the direction of pre-rendering, or using dynamic rendering to give us the static HTML version of those pages. OK. But I can take a quick look afterwards.
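For illustration, one common shape of the dynamic rendering setup John mentions is a server that serves pre-rendered HTML to known crawlers and the normal client-side app to everyone else. This is only a rough sketch: the prerender helper is hypothetical (in practice it might be backed by headless Chrome or a service like Rendertron), and a real bot check would need to be more careful.

```ts
import express from "express";
import { prerender } from "./prerender"; // hypothetical helper returning static HTML

const app = express();
const BOT_PATTERN = /googlebot|bingbot|baiduspider/i;

app.get("*", async (req, res) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (BOT_PATTERN.test(userAgent)) {
    // Crawlers get pre-rendered static HTML, so no client-side rendering is needed
    res.send(await prerender(req.originalUrl));
  } else {
    // Regular users get the normal React single-page app shell
    res.sendFile("index.html", { root: "build" });
  }
});

app.listen(3000);
```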
Let's see, one more question, and then back to work. I have a website with its VM in India, not sure what VM. Will Google crawl the site from India or from North America? Does crawling from North America take more time than from India? Google is not indexing my new sitemaps. It shows results from the old website, which are 301 redirected, which I've removed using the directory removal in Search Console. There is no update about the index coverage of my new sitemap. It's an Angular Universal site.

So I guess there are different parts here that we can take a quick look at. We do primarily crawl websites from the US. At least, the IP addresses that Googlebot uses tend to map back to the US in most of these kinds of lookup tools. Obviously, that's somewhat arbitrary, in that these tools have to make some assumptions there, because IP addresses are hard to map to exact locations. So sometimes these tools also get it wrong. But primarily, we do crawl from IP addresses that are based in the US. That means if you're doing anything special on your website that would be different for users in India compared to users in the US, then that's something that might play into how Google actually indexes this content. You can test that using the mobile-friendly test, like we mentioned before. The mobile-friendly test essentially crawls with the normal mobile Googlebot and shows the page how it would look from that point of view. So if you're doing anything different, where what you see when you check from India directly differs from what Googlebot sees, then just keep in mind that Googlebot will index the version that it sees. It doesn't know to crawl your website from India. And as far as I know, we don't have any Googlebot IP addresses that map back to India.

Hey, sir. Yeah. Excuse me. Yeah. I am the one who posted the question. I have a question. If it is crawling from the US, will that take more time? My page speed might go down. So will that be an issue? Usually that isn't an issue. For crawling itself, the differences between having to crawl from the US and from India are minimal. With regards to understanding how fast the page is, we take different things into account, including lab data, where we artificially measure the speed that this page would take, and also field data, where we look at the users that are going to your pages and what speeds they're seeing. So if your pages are pretty fast in general, and users in India are seeing your pages as being pretty fast, then that's pretty much fine. So that's not something where the crawling location of Googlebot would play a role.

Hi, John. Oh, sorry. I guess the other part of the question was around sitemaps and indexing and crawling. I think we've talked about that a little bit. The one thing I do want to mention is the move from the old website to the new website that you mentioned there briefly, where you redirected from the old site to the new site and then did a directory removal in Search Console. I would not use the removal tools any time you're moving a website. Because what happens there is the removal tools don't change indexing at all. They only affect what is shown in search. So if your old website is currently indexed, and it's redirecting to your new website, and you use the removal tools, then essentially your old site disappears. And until we've actually indexed the new site and understood all of those redirects, we wouldn't be showing anything. Because you're telling us not to show the old site, but we don't have the new site indexed yet. So we don't have anything to show. So if you're doing a site move, or if you're moving part of your website, set up the redirects and let them take their time. They'll be seen at some point. They'll be followed at some point. It's not something that you would need to force. And using the removal tools would not make the site move go any faster. So if you still have those removals running, then I would personally cancel them and prefer to have your old URLs indexed, with the redirects, so that users get to your new site anyway, rather than not having anything showing at all in search.

But we need to remove some results from Google. The ones I noted are not good quality content. OK. I mean, if you want to remove that content from Google, that's perfectly fine. But if you just want to move it to a different URL, then I would not use the removal tools.
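For the site move itself, the redirects John recommends leaving in place are plain 301s from every old URL to its new counterpart. A minimal sketch for an Apache server, with a hypothetical new hostname:

```apacheconf
# Permanently redirect all old URLs to the same paths on the new site
RewriteEngine On
RewriteRule ^(.*)$ https://new-site.example.com/$1 [R=301,L]
```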
Because we are using an Angular Universal website, will it take more time to index? It depends on how you set that up. With Angular Universal, you can also set it up so that you have a static HTML version that you serve for search and crawling, which we call dynamic rendering. And if you have that set up, then we can crawl and index those pages just the same as any normal static HTML page.

In our old version, we submitted more than 1,000,000 URLs, but only 3,000 have been indexed, even with Angular Universal. May I know the reason why they might not be indexed? Yeah, I think that probably comes back to the general question of crawling and indexing. We might know about a lot of URLs, but if we're not sure about the quality and how relevant they are for users, then maybe we won't index them all. So that's something where I wouldn't focus too much on the technical side of things, if you're sure that they can work in search. I'd rather try to improve the quality of the website overall. Thank you.

Hi, John. Hi. I have a question, because we are facing a new, unique situation with one of our projects. One of our clients sells a lot of products online. Now, for one of the product categories, they have a lot of products under that category, and they want to create a separate website for that product category. Their current website has some rankings for that product category. So what would be the best option: a 301 redirect or the canonical tag? Because we are trying to implement the canonical tag instead of a 301 redirect to hold the ranking, so that the ranking passes to the new website, the new domain. So which one would be better?

I think from a ranking point of view, they're probably about the same in the end. And it's something where, if you're splitting a website, then you can't necessarily assume that the new URLs on a different website will rank the same as the previous ones. So that's always a bit tricky. If you move a whole website from one domain to another, then we can move those signals fairly directly. But if you're taking one website and you're saying, well, this small part here is now a separate website, then that means we have to reevaluate those websites individually. So we'll follow those redirects, but we don't have a way of saying, well, this ranked like this, therefore this small part of the website will rank in the same way. It's possible that it'll rank similarly. Maybe it'll rank a little bit better, a little bit worse. But in general, it's not expected that it would rank exactly the same way if you're splitting a website or if you're merging a website.
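For reference, the canonical-tag option discussed here is a single link element in the head of each old category page, pointing at its new home on the other domain (the hostname is hypothetical); the alternative is a server-side 301 redirect to that same URL:

```html
<!-- On the old category page, a cross-domain canonical to the new site -->
<link rel="canonical" href="https://category-site.example.com/products/">
```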
All right. Wow, we managed to make it all the way through, and we still have time left. So here's your chance. Ask your questions.

So, hi, John. Hi. Whose turn is it? Mine. OK, so after the recent updates, I'm seeing more clickbait-type titles in the SERPs. For example, authority websites like the Washington Post, cnn.com, et cetera, wouldn't use all-caps words in the titles. They wouldn't use hyped-up titles and exaggerations and things that go more towards the gossip type of sites. So wouldn't it be a good ranking factor to revert things back, if they changed, regarding the rules for capitalization of words in titles? Because there are grammar rules which are different for different languages. And now I'm seeing more, how can I say, improper titles, non-professional titles, things that are more towards what the content farms are doing. So if you apply these rules for proper grammar, for proper capitalization, for no all-caps words, for no repeated exclamation marks, et cetera, all these low-quality websites wouldn't rank on page one, and we'd actually see more authority websites there. And there is another thing. You say that you don't use usage metrics, but people tend to click more on these clickbait-type articles. Is it possible that Google will start ranking them higher? Just because something is being used more, clicked more, and shared more, because it's hyped up, that's not a sign of quality. So if we remove these kinds of words, the all-caps titles, et cetera, if it's a bit more professional, we'd have better-quality authority websites ranking on page one.

Yeah, I think that's always a tricky balance, with regards to what is relevant and what users tend to click on, and where it's hard to tell if something is actually useful or not. I don't know about the specific kind of clickbait titles. I'm pretty sure we look into that to some extent as well. But specifically around sites that attract a lot of clicks, I think we've called these click magnets. And it's something where we do see that, if we were to rely on something like user metrics, then these would suddenly be very visible. But that doesn't mean that they're very good. It might be that you have some really funny joke, and it's in the title, and everyone is like, oh, I want to see the answer to this joke. And it's not that they want to see this answer because it's relevant or because it's useful, but it's like, oh, this is funny, I want to be entertained. And if we were to rely on user signals for search like that, then those would suddenly be everywhere. And I don't think that would be a good situation, because it doesn't really help people with understanding what they're looking for, with relevance with regards to the queries that they send, rather than just something that happens to attract a lot of people. So that's definitely something that the team does look into. And we do worry about that a little bit. And that's something you sometimes see on social media sites like Reddit, where something really funny is always ranking number one. But that doesn't mean it's actually good content. It's just that people look at it because it's funny.

Yes, exactly, funny. Because Google Search shouldn't be only about entertainment. It's not YouTube, where you can put all these clickbait kinds of things. Yeah, I believe YouTube struggles with this quite a bit as well, because if you put something kind of clickbaity into your thumbnail, then suddenly you get a lot of views. But you don't get the views because your content is good. You get the views because you have some clickbaity thumbnail. And that's something that I've seen on the YouTube side go in various directions as well. But I'm not on the YouTube team. I don't know the details of what they're doing internally. It's just, as a user, I see that, also in the recommended videos they show on the side, those kinds of things.

I've got a question regarding the integration of multiple news properties on the one domain, because we've got several formats which provide news content, but unfortunately, not all of them are being accepted by the publishing tools. So I was wondering what has changed, or how can we address these issues? I don't know. So specifically around Google News, you'd have to go through the News Publisher team, the feedback form. Oh, you're breaking up. In these cases, there's also a News Publisher help forum, where I would go to try to find out more. But the whole setup around Google News is a bit unique, and it's not something that we include in the normal part of search. We are, I don't know if you're from Germany, your accent sounds vaguely German, but we're doing a hangout for German news publishers, I think in a week or two. So if you post your questions there, then I can try to find answers for that a little bit in advance as well. Thank you.

All right, next one. Looks like Mihai has been busy answering questions in the chat as well, which is great. Thanks, Mihai. What else is on your mind?

I do have one question about Google Search Console for our recipe site. We launched a bunch of recipes there in September, and some of them are ranking in Google. But then in Google Search Console, under Recipes, below AMP, it says there are zero valid, but then a bunch of them are valid with warnings. And some of the warnings are that we're missing video. But we don't have any video on the recipes. Do we have to have video in order for them to be noted as valid in Google Search Console?

Are you seeing the rich result for recipes in the search results? No. No? OK. I think that's one of the trickier parts there, in that some of the properties are recommended, or things that are available, at least, that we would use if we have them, but they're not required. So I believe the ones that are set as warnings there are issues where we look at it and we say, well, it doesn't have a video on it. We could still show it, but it doesn't have a video. So if you had a video, then we would be able to show that video as well. So we're trying to show both the state where something is really broken, which would be a clear error, and the state where you could improve something. So it's an opportunity that you have if you want to go a little bit further than just the baseline recipe rich results. If we're not showing them at all, and they're only shown as warnings in Search Console, then I would assume that it's not a matter of the recipe markup being bad or missing information, but more a matter of our systems just not understanding, maybe, the quality of the website yet.

OK. So what you can sometimes do there is a site query for your website. And sometimes you'll see the rich results shown for the site query, but not for normal query results. And that's a pretty strong sign that we can understand that this markup is there, but we're just not showing it because we're not sure about the quality of the website. OK. And that will probably take some time. This is a completely new website. Oh, yeah. If it's a completely new website, it takes a bit of time. If it's an existing website and you're still seeing this, then that's something where I would try to find a way to improve the quality overall. But if it's completely new, I think it's kind of normal that it takes a bit for our systems to understand that. Cool.
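For what it's worth, the video warning corresponds to an optional property in Recipe structured data. A minimal sketch of recipe markup with the optional video attached (all values hypothetical):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Apple Pie",
  "image": "https://example.com/photos/apple-pie.jpg",
  "recipeIngredient": ["3 apples", "1 pie crust", "100 g sugar"],
  "video": {
    "@type": "VideoObject",
    "name": "How to make apple pie",
    "description": "Step-by-step video for the apple pie recipe.",
    "thumbnailUrl": "https://example.com/photos/apple-pie-thumb.jpg",
    "contentUrl": "https://example.com/videos/apple-pie.mp4",
    "uploadDate": "2018-09-01"
  }
}
</script>
```

Leaving out the video keeps the markup valid; it just forgoes that extra opportunity, which matches how the warning is described above.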
John? Hi. Hi. So I wanted to ask: if some websites are lagging behind on mobile-first indexing, what would be the reason? I mean, is there any reason from Google's side? Let's say I have different international websites. All of our websites are mobile-first indexed, except one. So is there any particular reason for that? I mean, can it vary from one site to another, or can it be geographically based, where some sites are in the first batch and others are in the next one, something like that?

That can happen. We don't have anything specifically set up for mobile-first indexing with regards to the location of the website or the type of the website. It's more a matter of our classifiers looking at the website and saying it's ready for mobile-first indexing, or it's not ready for mobile-first indexing. So if it's not moved yet, it might just be that it hasn't moved yet, because we haven't gotten through the whole web. Or it might be that there are still some issues that you can look at with regards to mobile-first indexing. Usually, these fall into things like the text not being complete on the mobile version; the embedded content not being as complete, so images and videos maybe not being indexable in the same way, maybe images blocked by robots.txt, those kinds of things missing; or structured data, where structured data might be missing on the mobile site but is actually there on the desktop site. There are some other criteria as well that we look at, but those are the three main things where we see issues with a lot of websites. If the website is responsive design, then those issues don't play a role at all. Then it's probably just a matter of time for us to move things over. I suspect over the course of the next year, we'll see more and more of these sites shifting over. We'll probably also see some tools or some messages in Search Console that make it a little bit easier to understand where the remaining issues are. But for the most part, if it's not mobile-first indexed yet, that's perfectly fine. It's not something you need to work on. Sure. Thank you, John.
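As a rough sketch of the parity issues John lists, here is one way to spot-check whether the mobile version of a page serves the same structured data as the desktop version. This is only an illustrative script under stated assumptions: the URL and user-agent strings are placeholders, and a real check would also compare text content, images, and robots.txt rules.

```ts
// Count JSON-LD blocks served to a mobile vs. a desktop user-agent.
const URL_TO_CHECK = "https://example.com/some-page";

async function countJsonLdBlocks(userAgent: string): Promise<number> {
  const res = await fetch(URL_TO_CHECK, { headers: { "User-Agent": userAgent } });
  const html = await res.text();
  return (html.match(/application\/ld\+json/g) ?? []).length;
}

async function main(): Promise<void> {
  const mobile = await countJsonLdBlocks("Mozilla/5.0 (Linux; Android 8.0; Pixel 2) Mobile");
  const desktop = await countJsonLdBlocks("Mozilla/5.0 (Windows NT 10.0; Win64; x64)");
  console.log(`JSON-LD blocks - mobile: ${mobile}, desktop: ${desktop}`);
  if (mobile < desktop) {
    console.log("The mobile version may be missing structured data that desktop has.");
  }
}

main();
```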
All right. Wow, we're getting through all of the questions. That's fantastic. Anything else on your mind? I have one, or does somebody else have one? Go for it.

OK. So I've noticed, while researching some publishers in the United States, that certain publishers prefer to write about other brands only when there's some sort of affiliate link that they can use to point towards their products or services or whatever it is. And as far as I know, Google tries its best to figure out these affiliate links and not count them in the whole ranking process. I've noticed that some affiliate networks, I don't know exactly how they do it, but it seems that the link actually just looks like a normal link. It's an href with the normal link to a certain product or service. But when you actually click on that link, or on the button, whatever it is, it actually moves you through the affiliate network. It changes your URL to the affiliate network and then goes to the actual website. And I assume that they do this because they think that Google just parses the web page, finds the links, doesn't see an affiliate link, so everything's OK, and doesn't actually click on a button like a normal user would. So I was wondering if there's something that you're doing to detect and understand that that is, in fact, an affiliate link, and that you should not take it into account.

I think people have been doing this for a really long time. So there are various tricks that they use to swap out the links and the URLs that are linked there. And as far as I know, the webspam team and the quality teams are aware of these techniques as well. So it's something where, usually, what I see happening is that the webmaster spends a lot of time to come up with this complicated scheme to hide the affiliate links, and in the end, nothing changes. So you add all of this complexity, and nothing changes. So from my point of view, I would just use a normal nofollow affiliate link and leave it at that. There's no reason to try to hide it more than that. I don't think it helps the website or the linked-to website in any way. Right. Yeah. Well, I just mentioned it because it's the first time I've seen it, and it was a pretty big publisher and a pretty big brand, which was a bit weird. And it was a publisher that linked to multiple brands. Every other one had an affiliate link. There was one that didn't have any affiliate link. It looked a bit suspicious. But when I clicked on it, it just sent me through all of these affiliate steps. And I found that rather weird. Yeah. There are lots of these weird schemes to try to hide those links. I haven't seen any where I'd say, oh, wow, they did something really fancy and they had an advantage from that. Usually it's like, wow, so fancy, and nothing changes. OK, fine. I can shoot you an example if you want.

So I have another question, regarding some of the generic phrases. For example, if you search for bodybuilding supplement guide, you get results from bodybuilding.com, and they're all the same for the first page. So you get 10 blue links from the same website, bodybuilding.com. Is it possible that, in this case, Google is confused because the brand, which is bodybuilding.com, and the word, which is a generic word, bodybuilding, are the same thing? Because a website shouldn't occupy all the terms regarding bodybuilding. And you can see that this website monopolizes a lot of the words and phrases regarding terms like bodybuilding. It's basically like you searched with a site: prefix for bodybuilding.com when you search for something like bodybuilding. Is Google confused in that case, because the brand and the generic term are the same thing? And should Google impose limits? Because I believe there were such limits before, that if you have the website listed two or three or four times, you shouldn't list it more. Because, I mean, nine or 10 times is a bit too much.

Yeah, this comes up every now and then. So I think with regards to whether or not we would mix up the brand or the domain name and the generic term, that's something that, for the most part, I see us handling fairly well, especially when I get questions from new websites where they say, oh, I bought the domain name that is keyword1keyword2.com, and I'm not ranking for my brand name. And when we look at that, we say, well, people who are searching for this are not looking for your brand. They're looking for those two keywords. So just because your domain name has those two keywords, it doesn't mean that we would rank it any higher than any other website.
So I assume a lot of the ranking with regards to that generic term, bodybuilding, for that website is also due to this website just being around for a long time and being well known for these topics. So it makes sense to show it in search. Yeah, but the guide is not super, super, super high quality. So it shouldn't get 10 links one after the other. I mean, maybe give them two or three, but not 10. That's a different question. I think that's something that we also get feedback on from time to time, that we're showing too many of the same results. And I know there are teams at Google that are working on this, to try to find the right balance. But it's definitely not the case that we would say two results is the maximum for a website, or three, or four. Sometimes it makes sense to show a lot of results from the same website. I don't know if it makes sense for this particular query. Maybe there are other things that would be worth highlighting more. But in general, it's something where we go to the quality engineers, and they show us different feedback, saying, look, people are looking for this website based on this query, based on what we're seeing, so it makes sense. Or we go to them and say, oh, yeah, this is a mistake on our side, and we need to improve this, and we'll take this as a data point. And I know that the teams that work on quality and ranking constantly revisit this question of how many results we should show from the same website. That's something that they look at all the time, because sometimes they'll add more, sometimes it'll go back a little bit. It's really tricky to find that balance, right?

John? Hi. Hi. So, one more question. This is regarding one of our websites we're working on. The thing is, when someone is searching for, let's say, the best credit card for students, out of 10 results, I get nine results from US websites. So is that something that Google needs to work on, or do you think this is, again, a kind of generic query? Someone in Malaysia looking for, let's say, credit cards might want local results, but they mostly get results from US websites. So how can we tackle those situations? We want to rank for those queries, but it's our US team that has huge authority over us as a group, so they tend to outrank all of us in the Malaysian market. So what should we be doing?

Yeah, having a strong competitor is always hard. I don't really have an answer that lets you jump over someone who has already built up a really strong presence online in those niches. So that's something where you basically just have to keep working at it. Or another approach that a lot of sites take is to say, well, this particular query is very competitive, and it'll be very hard for us to rank very high in these search results for this query. Maybe we should focus on a variation that is a little bit more specific, or that is a unique twist on this query that we think people will want to search for in the future, which currently isn't being covered by this strong competitor. So those are the different directions that you can take there. And regarding which one you actually end up using, that's more of a strategic question on your side, rather than a technical SEO question where I could say, you need to do it like this, or you need to do it like that.
Yeah, when we use the variations over here, then yes, we tend to rank better, and we are ahead of these players. But when the queries are more generic, where I'm not using the country or anything like that, at that point we don't see ourselves ranking higher. So what I can see it moving towards is something like, OK, this website is not at that particular level within the overall group, so maybe they rank us lower than these websites. Yeah, I think that's the normal struggle with SEO, trying to jump above your competitor. Yeah, I don't really have the magic answer there, because then your competitor would take that and just jump over you again. So it's hard to say. Thank you.

All right, let's take a break here. I'll spend some time and double-check some things with that website that you sent me, Rob, if you want to hang around. And otherwise, yeah, let's take a break here. It's been great having you all here. Thank you all for joining, and thanks for submitting all of the questions. This afternoon, I'll be in Milan, actually, at the MBE Summit conference. So if any of you are in Milan, feel free to jump on by, or maybe you're there already. So I'll be there. And otherwise, I wish you all a great weekend, and I'll see you again in one of the future hangouts, maybe. Thanks a lot, and see you all later. Bye, John, have a good weekend. Thank you, bye-bye.