All right, welcome everyone to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst at Google in Switzerland. And part of what we do are these office-hours hangouts with webmasters and publishers, SEOs, anyone who kind of cares about search. As always, a bunch of questions were submitted ahead of time, so we can go through some of those. But if any of you who are here live want to get started with the first question, you're welcome to jump on it now.

I can start, John. OK. Go for it.

Hey, John, I know that my question is a little general question, and I'm sorry about that. But we have a unique platform that uses user-generated content, and we have a lot of duplicate content because of that. Our platform lets users publish their projects, but they also publish them on other platforms, so we have a lot of duplicate content. But we give extra value for our users, because we have discussions over there about the product, about the content. So we give extra value. So we were thinking that Google would see that. But we're feeling that we have a flag or a penalty for duplicate content because of that. So we began to noindex all of our duplicate content. We want to know if this is the right approach. And frankly, we want to know if we are penalized, if we have a flag, because we didn't get any notification from Search Console. So we don't know if we are doing the right thing or not.

OK. So if you don't have a notification from Search Console in the manual actions section, then at least you can be kind of sure that there is nothing from Google's side where someone is manually saying there's a problem with this website. So I think that's a big first step, and that sounds like a good thing. The other part is a bit trickier, and it's more about Google trying to understand the relevance of your pages' content compared to the rest of the web that we've seen. So that's something that is done algorithmically, where we don't have a penalty on your site or anything. But in general, I guess, I don't know your website, so it's hard to say. But in general, what happens is we try to evaluate the quality of a website overall, kind of as a first understanding of how good this website is overall. Is this something that we think could have potentially relevant, unique, compelling content, things that we should show in the search results? So that's kind of the first step that we will do. And then on a duplicate content level, that's something that's done more on a per-URL basis, based on the content within the pages themselves. So that's something where it's not so much a matter of penalizing websites for having duplicate content, but more a matter of us recognizing that there are blocks of text that are duplicated, either within this website or across multiple websites. And if someone is searching for something that is specific to that block of text, which of these pages should be the one that is shown? So it's more a matter of picking the right page to show. So I could imagine both of those could potentially be playing a role on your site. On the one hand, if you have a lot of user-generated content and don't have good control of that user-generated content, then it might be that there's a lot of really low quality user-generated content being published on your site. And from our point of view, we don't differentiate between content that other people have published on your website and content that you're publishing.
Because essentially, you're publishing content with your website. So if you're seeing that there's a lot of lower quality or bad user-generated content being published, then that might be something that you could handle on your own. Generally, we recommend finding ways to improve the quality of the content, which might be hard if it's user-generated content, or removing really low quality user-generated content from the search results, which you can do with a noindex.

This is what we're doing, but we think that we don't have any rankings for high-competition keywords. For example, we are one of the biggest, if not the biggest, websites in our topic, and we're not ranking at all for our main keyword. So we feel like we have been penalized or flagged, because users are pointing out that this is something that is relevant in that niche, in that topic, but Google has decided not to present it in the SERPs. So we want to know if we got punished or not, and if not, why we're not ranking for high-competition keywords at all. We're talking about a huge website that provides a lot of content and doesn't rank for any competitive keyword at all.

That's really hard to say, often. So I think if you're not seeing a manual action, then you're not flagged. So at least from that point of view, there's nothing manual that would be holding you back. So I think that's already kind of a good thing. But the trickier part, of course, is the overall picture of your website, how we could look at that as a way of saying, is this really high-quality content overall, or is this something where we might need to be a bit more cautious overall? So that's something that is really hard for me to judge without actually looking at the website. So what I would recommend doing there is maybe starting a thread in the Webmaster Help Forum, and including the details of your site there, some of the queries where you're seeing problems, maybe if you've seen changes over time, some of the dates where you've seen those changes, so that the people there can take a look at that and say, well, on this date, maybe this specific type of algorithm launched, and you're seeing this effect, because maybe there's something that you're missing out on. And you can also send me the link to the forum thread directly, either by Twitter, or probably easiest by Twitter now, because there's no Google Plus. I'll leave the site itself in the comments. Yeah, I can tweet you also. OK, cool. Then I can take a look at that as well, to kind of see, is there anything on our side that is really holding you back, or is there perhaps something that you could be doing differently to make it easier for us to understand your higher quality content? Great. OK, thanks.

Jala, do you want to go ahead? OK, actually, I joined the last Google Webmaster hangout, and I asked you about all the images from my site that haven't been indexed. And for the past week, I have been trying to get all my images indexed, but I am unable to get any of the images indexed. The only things that are in Google Images are the author images, like from the GIFI or things like that. But there is no image from my site indexed in Google. And I can't find anything wrong: the HTTP response code is good for the images, and there's nothing in the source code of my site. There are other sites on my hosting. The latest change that I have made is applying the WebP version of my images, which was suggested in the Google forums, as I have seen. So what can I do, actually?
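As an aside on the noindex suggestion above for low-quality user-generated pages, here is a minimal sketch of what that looks like in practice; the markup below is a generic example, not taken from the site being discussed:

    <!-- In the <head> of a low-quality user-generated page you want kept out of search -->
    <meta name="robots" content="noindex">

    <!-- Alternatively, as an HTTP response header for the same URL -->
    X-Robots-Tag: noindex

The page can still be crawled and viewed by users; it is simply dropped from Google's index, which matches the idea of removing really low quality user-generated content from the search results while keeping it on the site.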
Maybe if you can post a link to your forum thread in the comments here in the chat, then I can take a look at that for you. OK, I am commenting it now. OK, fantastic.

John, I have two quick related questions. Somebody asked me on the livestream whether the bugs at Google are more frequent than usual, or whether you guys are just being more transparent than usual. Which is it? If they're what? The bugs that you're having at Google, like the Search Console indexing, manual actions, all these bugs that in the past month seem to be a lot of bugs. Is that happening more often these days, or are you just being more transparent about it? That's the question I got. Yeah, I wouldn't say it's more often, but we definitely ran into a big bug there, which took quite some time to resolve. And because that's very visible to site owners, in particular through Search Console, that's something that's important for us to tell people about. I think for the most part, people in the normal search results didn't see that effect that strongly, but definitely site owners, when you're looking in Search Console, where you see the details of how the indexing is happening, how the reports are being updated, you definitely see these things fairly clearly. And if we don't talk about them externally, then people will be confused and wonder, is there something that they're doing wrong that they need to fix? And sometimes we see situations where people are confused and they think they should be doing something differently, breaking things even worse than they might be otherwise. So that's part of the reason why we're trying to be as transparent as possible about these types of issues here.

So you said that you could see that information in Search Console, but the data, as the page specifically says, between April 9 and April 26 or 25, for everything except the performance report, is just a copy of the April 26 data. So you're kind of masking the index coverage issue, because that technically started around the 5th, I believe, but it continued through the 11th. So you really can't tell your indexing issues by looking at Search Console data. Well, yeah, because we don't have any data that we could show there. Why is that? I'm not understanding why the data is lost if it was an indexing issue. Well, if we don't have it from the indexing side, we can't collect it for Search Console. Right, but that was fixed on the 11th, so what happened? No, there were issues that were still kind of pending and rolling there. So it was pretty heavy duty. So, to be specific: again, the issues in Search Console were directly related to the indexing bugs from April 5? Yeah. OK, thank you. I don't know if we'll have more information on that in the future. I hope we can do a short blog post or something to at least talk about how these things happen and how we deal with them internally. But I'm not sure if we'll be able to do that. OK, thank you.

OK, another question, sir. We all know about the bug that Yoast had, where the image attachment pages were creating thin pages. At that time, I was hit by that, and the images were indexed as pages. So I now redirect the attachment page to the image URL itself. But Google is still indexing those pages. Like, if I search site: and my site, it's still showing those pages. How can I change that? Like, how do I force Google to know that these are images that have been redirected and are not currently supposed to be in the index? I don't quite understand how you're redirecting things. So that seems good.
Actually, we all know that Yoast had a bug where they were redirecting the attachment to a page. And we all know that Google thinks that is a thin, low quality page, and maybe it can affect the ranking. So I redirected it directly to the image, like wp-content/image.jpg or something like that. But Google is still showing it in the search results as a page, like /WGB, and when I click on it, it redirects to the image. But still, in the search results, it is showing like that. OK. So it's kind of an image landing page, which from our point of view, I think, is fine. So in general, you wouldn't need to do anything special with that. But I think what is happening here is the general configuration when it comes to redirects: we understand that the old and the new URL are related. So if you explicitly do a site: query for the old URLs, then we'll often show you the old URLs in the search results anyway, because we think you're explicitly looking for these, so we'll show them to you, even though we've already processed the redirects. So that's something where, if you're explicitly looking for something that you've already redirected, you probably will still see the old thing in the search results, even though we've moved on to the new ones. OK.

Yeah, I had a question. Sure. I was wondering about siloing, so putting content under a particular directory. So for example, taking content out of /blog and putting it into sections under particular features that we have, and whether or not that improves SEO, whether or not it adds credibility to the root page. So let's say you're selling t-shirts, and you move a blog post about t-shirts from /blog/red-t-shirts to /t-shirts/v-neck/blog/red-t-shirts. Does Google think that's important, or does it kind of skip over that? I think we would skip over that. So for us, the URL is primarily an identifier, and it's more a matter of how this URL is linked internally. So if this post about red t-shirts is linked, for example, from your t-shirts page, then we understand those two are related and they should be connected together. And it doesn't really matter if it's in /blog, /t-shirts, or /123. That URL itself doesn't matter that much for us. So it's more a matter of how you bind these URLs together and how you handle the internal linking. So it's the links that take priority rather than the structure? Exactly.

And a kind of follow-on question, sorry. I think I've noticed that Google gives quite little priority to the words in a URL, which I guess kind of makes sense, because you've only got a certain amount of space. How much does it matter having "red" and "t-shirt" in that URL, to carry on the example? Or is it really not very important? It's very minimal. So we do use the words in the URL as a small factor, but it's not even strong enough that I would recommend most sites change their URL names just to include those keywords. So if you have product IDs and you could switch over to a product name in the URL, I think for the most part you wouldn't see any significant change in search if you were to change those URLs. It's like a really, really small factor. I think if you're starting a new site and you're aware of this and you can make a clean site structure with URLs that are logical and easy to read, then I would go for that.
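As a minimal sketch of the internal linking point above, using the hypothetical t-shirt paths from the question (the paths and anchor text are made-up examples):

    <!-- On the /t-shirts category page: this link is what tells Google the two pages belong together -->
    <a href="/blog/red-t-shirts">Our guide to red t-shirts</a>

    <!-- The post itself would be treated much the same at either address: -->
    <!-- /blog/red-t-shirts   or   /t-shirts/v-neck/blog/red-t-shirts -->

In other words, the words in the path are a very small factor; the link from the relevant category page is what carries the relationship.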
But for the most part, it's not worth reshaping the whole website just to squeeze those words in. OK, thank you very much. Sure.

All right, let me check the chat. There's some stuff here. Pretty long one about an e-commerce site. Wow, OK. Are you still here, Sandy? Yes. OK, can you maybe run through your question quickly to explain what you're asking about or what the issue is? I think you're muted, so we can't hear you. OK, otherwise let me just read through it a little bit. I think it's a matter of other distributors who are ranking for the products that you're manufacturing, and the general question of why they are outranking you and what you could be doing differently. I think the general difficulty here is that if this content is available on multiple websites, it can rank on multiple websites. And we don't have anything specific in our algorithms that says this is the manufacturer, therefore it should always rank first. So we essentially see this content across multiple websites, and overall, we'll try to figure out which of these pages is most relevant for the user when they're searching and which of these pages provides the most value for the user, which is the most high quality overall, the most relevant to the specific queries that you're doing. And sometimes that can be the manufacturer; it doesn't have to be the manufacturer. So a common use case, for example, might be if someone in Switzerland is searching and there's a local distributor who's selling it as well, then maybe we should show the local version rather than some global version that's located somewhere else. So that's from a geo-targeting point of view; that's something that could be playing a role there. It could also be that other sites are doing really well at SEO, from a technical point of view, from an overall quality point of view, from the point of view of positioning themselves in the ecosystem. And that might also be playing into this. So I think, in short, there's nothing that we would see as a bug on our side when distributors are ranking ahead of a manufacturer website. I think that's something that you have to take into account when it comes to the web: once multiple sites are offering the same product, they can rank independently, and one of these might rank above yours, even though you're the original provider of that content. So that's something where there's not really a trick that you can do to make that work differently, but it's more a matter of working on the normal SEO factors that other sites would be working on as well. Not just from a technical point of view, making sure you have all of the structured data markup in the right place and valid HTML or whatever other technical items there are, but also from an overall quality point of view, that you're providing something that users want and that you're embedded in the ecosystem in a way that users recognize you as an authority on this topic. So no quick and easy answer to that one.

Let's see. We moved our website last week and we started to use HTTPS and HTTP/2. After this change, Google started to spend more time downloading our pages, according to the old Search Console graphs. Is this because of HTTPS or HTTP/2, and is it a bad thing? If it is, how can we fix it? I think in general, moving to HTTPS is fine, and using HTTP/2 is also fine. There are ways to configure that so that you get optimal value out of that configuration.
From a Googlebot point of view, from a crawling point of view, we don't use HTTP/2 because we're not a traditional browser. So that's not something that would affect normal Google crawling and indexing. It sounds more like, if you're seeing significant changes in the way that we're crawling content on your website, that might be related to the server change that you made. So it's kind of a different problem. But it was my question, hello, by the way. Sorry? It was my question. And so it's not about HTTP/2, it is about something else, right? Yes, that's my suspicion. Especially if you're seeing more time spent downloading individual pages, that can be multiple things. So it could be that Googlebot is crawling more complicated, larger pages from your website, which might be fine. It could also be that Googlebot is just needing more time to download pages, which means that your server is slower. So what you can do there, if you want to be sure that it's one or the other, is double check your server logs to see if Googlebot is really crawling a general sample of your pages and not just focusing on large or complex pages on your website. OK, thank you.

May I ask another question? Sure. If I post my content on social media content sharing sites with source links, is that a bad thing or a good thing? I think generally that's good. So for search, we don't use that as a way of understanding links to a website, because for the most part, links from social media are nofollow by default. But this content can also appear in search; it's also a page that's publicly available. So that can be visible in search, which could be good for you. And in general, having a diverse set of ways that people reach your website, I think, is always really useful. So the more you can spread out the sources of traffic, the less you'll be dependent on any one particular source of traffic. So if something happens in Google search and suddenly your site is not shown anymore, then you still have all of these social media channels that you've been working on, where people can find your content and go to your website directly too. OK, thank you, John. Sure.

All right, let me run through some of the submitted questions, because people have been submitting them, so I just want to look at those. My question is about verifying ownership on Google according to this post. I followed the instructions and searched for my brand on Google. However, there's no option to claim the Knowledge Panel in the search results. We're a pretty big news publisher. Any reason why we can't claim the Knowledge Panel, and any idea when this will change? I don't know. So in general, claiming the Knowledge Panel is something that should be possible if you have your website, or one of these kind of primary properties, verified in Search Console. I don't know why that might not work. What I would do is maybe post that in the Webmaster Help Forum, and we can take a look at that if you send me the link to the thread.

I published new buying guides meant to rank for some low competition keywords, with good on-page SEO. Two weeks later, they were nowhere in the search results. Interestingly, trying a Google search for those keywords from some European IP addresses shows my page on page 2. Is this related to the recent indexing bugs? No, this would not be related to any indexing bugs. Essentially, if they're shown from European IP addresses, then they are indexed normally. So that sounds more like it's just a generic kind of ranking question.
Why is my content not ranking as well as I want it to in the US, or in whichever country you're currently located in? And that's obviously kind of hard to address. I think in general, what I would watch out for, and I don't know your website, so I'm just guessing based on the keywords you have in your question, like the buying guides and low competition keywords, my worry is that maybe some of this content is somewhat low quality. And if you have a lot of low quality content across your website, then we might assume that your website overall is not as high quality as it could be. So that's something where I would generally watch out for that, just to make sure that you're not just throwing content out there for the sake of having content there, but rather building up a website that is built on high quality content that you can rely on to be really useful and relevant for the long run. And that's something we do pick up for ranking as well. So it's not like we would not index it; we would still index it, obviously, even if it were lower quality content. But especially for ranking, when you're seeing it sometimes in search and sometimes not, a generally high quality website is something that we watch out for.

If your site is in an industry where search volume on mobile is very low, which search signals do you recommend focusing on most? Are there signals that are more important for desktop rankings and maybe not as important for mobile, and what would those be? So in general, I don't think it makes sense to focus on specific device-type metrics and to try to focus on specific things that you'd need to do there. I would try to work on the website overall. The thing that I would also watch out for is falling into the trap of: there are no mobile users of our website, therefore we won't make a good mobile version of our website. Because that's often something where maybe you don't realize that there are actually more mobile users than you thought, and they're just avoiding your website because it doesn't work well on mobile. Or maybe you don't realize that at some point this might change, where actually there are a bunch of mobile users who are interested in your website, and you never realize that, and therefore you never target them. So that's one thing to keep in mind there. The other is that we're switching to mobile-first indexing across the board, pretty much. So we're taking sites as we see that they're ready. And once your website is switched to mobile-first indexing, we will index your site using a mobile Googlebot. So if your site does not work well on mobile, if it doesn't have the full content on mobile, then that's something that will definitely be reflected in search. And it will be reflected for both desktop and mobile search results. So if we switch to mobile-first indexing for your website and your mobile site does not have the full content, then your desktop site will also not rank with the full content. So you really need to make sure that you're not avoiding having a good mobile site.

John, how far along are you, or Google, with mobile-first indexing? I know last time you said you were over 50%, but it seems like it kind of paused from there, at least judging from the notifications going out. We're moving. We're moving. So it's a tough problem. So it's coming step by step. I suspect at some point we'll have some more announcements on the next steps that we've taken and where we're headed from here.
In general, I think that the team is on track here, and we're seeing that a lot of the web is really ready for mobile-first indexing, which is kind of reflected in that over 50% of the sites that we show have switched. So that's, I think, overall a really good sign. Yeah.

I noticed that fields in the index coverage report, like the errors, can't be accessed with the API. Is there any other way to get that data? So at the moment, we don't have any API for the index coverage report. So it's not just individual fields; there's no API at all for the index coverage report. If you'd like to have an API for the index coverage report, then tell us about it and tell us what you'd like to do with it, so that we can take that to the team and discuss the options there. It's always tricky when it comes to creating APIs for features like this, but I personally love to see more usage of the API and more APIs available. So if you give us some information that we can pass on and encourage the team to move more in that direction, I think that would make me happier, and maybe that would help you as well.

Let's see. I'm curious whether links lose their potency over time. Meaning, if a publisher was linked to, say, by the Washington Post in 2009, and assuming the Post's site maintained itself and its PageRank, is that link as powerful and as useful to the publisher a decade later, or has it diminished over time? So essentially, that particular link is still that link. So when we look at that linking page and the linking destination page, that's still kind of the same thing. But the way that PageRank works is that it is distributed and propagated across all of the pages on the web, across all the pages on a site. So if the Washington Post, for example, had paused publishing since 2009 and nothing had changed on the website since then, that would be kind of the same. On the other hand, if they've continued publishing content and they've created millions of new pages since then, then suddenly that one link that you have from the Washington Post will be one out of millions of other pages. So it's not that this link artificially has a deadline attached to it, that it's no longer valid after a certain number of days, but more that the whole web has evolved since that time. And that means that in relationship to the rest of the web, that particular link is probably seen differently now than it was back then. So again, it's not that there's any artificial factor involved saying this is an old link, therefore it's not as worthwhile, but more that the whole web has moved on over time, and that can change the effect of that particular link. Obviously, if it turns out that this article that the Washington Post wrote back then suddenly becomes really important again, and people are really pointing at that again, saying, look at this great article and look at the site that they link to there, then suddenly that article might be even more relevant, meaning that link could be more useful. But that, again, depends on the way that the rest of the web evolves.

The Google Webmasters Twitter account was updated today with regards to the missing indexing data, which has been resolved, and the backlinks report, which is still kind of stuck, unfortunately. We talked about this briefly beforehand. And unfortunately, some of these indexing issues are really tricky to resolve, and they take a bit of time to get everything updated again.
What happened with this particular report, or the indexing report in general, is that from the indexing side, there were issues that meant we had to pause the Search Console pipelines. And we could resume them again in the meantime, but we weren't able to backfill the missing data there. So I think the question goes on with regards to cached pages and things like that. In general, one thing to watch out for with regards to cached pages is that they can be disconnected from the indexing side. So that's something that we regularly see, that people look at a cached page and say, well, the cache date is this, but the index says something different, so what's happening here, Google? And from our point of view, that can be completely normal. So I've seen a bunch of sites that mentioned that they have an older cache date at the moment, and from my point of view, that's not really something to worry about. From the indexing point of view, we essentially have the new and fresh content too. So I wouldn't worry too much about the date that's shown on the cached page.

We've been waiting a couple of months for a title change on a site to show up properly in search. How long does this usually take? Good question. There is no fixed time for a title change to be visible in the search results, and it might be that our algorithms are showing a different title on purpose. So those are kind of two tricky aspects there. Essentially, if we can see that the title that you're providing on the page is short and concise, matches what the page is about, and is not just a collection of keywords that you want to show up in search for, then we'll try to use that title in the search results. On the other hand, if it's a really long title, if there are lots of keywords in there and it looks more like a collection of stuffed keywords rather than a useful link that we could show to users, then our algorithms will probably try to pick a different title for that page, and we'll show that title. So my recommendation there would be first to double check that the new title on your page is really something short and compelling that works well for the page, and then perhaps use the submit-to-indexing tool within Search Console, which is part of the URL Inspection tool, to let us know that you've updated this page, so that we can take a look and reprocess that.

How long, or how much, does it take for a video included on a specific web page, and not hosted on YouTube, to be indexed by Googlebot? We've been waiting more than a month and a half and nothing has happened. I don't know. I think this is one of those things where I don't know if we have any explicit timelines on that. So specifically for images and videos, I know that things sometimes take a little bit longer to be processed. And that's one side there. On the other hand, with video content in particular, what happens there, when we can recognize that the video is on the page, which we can usually pick up with the structured data markup on the page, is we have to first confirm that this video and this page actually work well together. So we need to see that this is actually a reasonable video landing page, not just a random web page that happens to have a video somewhere on the page as well. So that's something additional that plays a role here, in that we really have to confirm that this is actually a good working pair, a page and a video that work well together.
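For reference, a minimal sketch of the kind of structured data markup mentioned above for a video landing page, using the schema.org VideoObject type; all of the names, URLs, and dates are illustrative placeholders:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "VideoObject",
      "name": "How to assemble the bookshelf",
      "description": "Step-by-step assembly instructions for the bookshelf kit.",
      "thumbnailUrl": "https://www.example.com/thumbs/bookshelf.jpg",
      "uploadDate": "2019-04-01",
      "contentUrl": "https://www.example.com/videos/bookshelf.mp4"
    }
    </script>

The markup only helps if, as described above, the page itself works as a video landing page and the video file and thumbnail URLs are crawlable.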
And when that's the case, then we will show that appropriately in video search, or in the video search mode, and, when that makes sense, also with the video thumbnail in the search results. But that's something that you might watch out for. That's assuming from a technical point of view you have everything covered. It sounds like you know a little bit of what you're doing there. So you're using a video format that works well for Google. You're not blocking these URLs with robots.txt. We can fetch the videos from your server. They're not blocked in the US, for example, or blocked in the location where Googlebot is crawling from. Googlebot is generally not blocked from crawling these kinds of URLs through the robots.txt file or other server directives that you have. All of these things have to work together. The video thumbnail, the same things apply there as well. So all of that needs to work together for this to work. Sometimes using a video hosting platform makes this easier, because you can pretty much rely on the video hosting platform to take care of all of these technical details on their own. On the other hand, often with a video hosting platform you have their video landing page competing with your video landing page, and then it's a matter of our algorithms trying to figure out which one of these landing pages is the best one to show to users. So that's kind of a downside there.

I work on this website. Our website is a React-based online platform for DIYers with over 70 million registered users and 23 million monthly visits. Oh, I think we looked at this one before. So that seems like one that we can take a look at.

We have one of the biggest news publishers in India as a client. One problem they're facing is crawl budget. Google crawls the website at an average of 400,000 pages a day, while we don't add more than 300 pages a day. That seems like a big mismatch, yeah? One problem I feel is the infinite scroll on the site. Could you help us understand how Googlebot actually reacts to pages with infinite scroll? So I think, offhand, the first thing I would do here, if you're seeing such a big mismatch, is double check the server logs so that you can see which URLs are actually being crawled. And based on the URLs that are being crawled, you can work backwards from there and see what might be happening here. Maybe you have some JavaScript files, for example, that have a session ID attached to them, or a server response with a session ID in the URL. All of these things can really blow up the number of URLs that need to be crawled in order for us to pick up these pages. And all of these aspects from looking at the log files will tell you a lot more about which areas you might want to focus on, and they might also tell you whether this is actually a problem or not. So what could be happening here is we're picking up those 300 new pages a day that you're publishing, and we're also checking a lot of other URLs that are maybe older, or a lot of URLs that are embedded on these pages as well. So if we're picking up the right pages and you don't have a problem with us crawling a little bit more, or in this case a lot more, then maybe that's fine and not something that you critically need to handle. On the other hand, if the server logs say that Googlebot is crawling everything except those 300 new pages, then you definitely need to figure out what's happening here and why Googlebot is getting confused by your site structure. Let's see.
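A rough sketch of the server log check described above, assuming an access log in the common combined format and a file called access.log; both of those are assumptions, and matching on the user agent string is only a rough filter, so adjust for your own setup:

    from collections import Counter

    counts = Counter()
    with open("access.log", encoding="utf-8", errors="ignore") as log:
        for line in log:
            # Keep only requests whose user agent mentions Googlebot (rough filter)
            if "Googlebot" not in line:
                continue
            try:
                # In the combined log format the request line is the first quoted field,
                # e.g. "GET /t-shirts/red?sessionid=abc HTTP/1.1"
                path = line.split('"')[1].split()[1]
            except IndexError:
                continue
            counts[path] += 1

    # The most-crawled URLs show where Googlebot actually spends its crawl budget
    for path, hits in counts.most_common(25):
        print(f"{hits:6d}  {path}")

If the top entries turn out to be session ID variants or embedded resources rather than the new articles, that points at the kind of URL blow-up mentioned above.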
And the infinite scroll might be playing into that, but it doesn't have to. So it might be that there's something completely different. My suspicion is that it's more a matter of content that's embedded on the page rather than the infinite scroll, but that's something you would probably see a lot faster by looking at the log files. With infinite scroll, what generally happens is we try to render a page, and we'll expand the viewport when we render, and then we'll contract it again to try to fit the primary content into that viewport. And with infinite scroll, that might trigger maybe one or two more scrolls to get more content in, and then we would render that part of the page. But it wouldn't trigger us infinitely crawling everything every time we crawl any particular page on the site. So it's not that we would expand to infinity from your infinite scroll. The one thing I would watch out for, though, especially on news websites, is that if you have infinite scroll, then it's very easy to run into a situation where you have two or more articles on the same HTML page. And from a news point of view, that can make it really confusing, because suddenly you have multiple topics on the page, and it's hard for us to tell what the primary news article on this page is. You have multiple dates on the page, perhaps, and that makes it hard for us to tell what the actual publishing date of this piece of news is. So all of these things combine. And for me, I would say for news websites, I'd be kind of cautious with infinite scroll and perhaps try to avoid that. In general, it's something you can do if you feel that it makes sense, especially if you're looking, for example, at paginated content, and it's like multiple types of shoes all in a giant list, and you have pagination happening, then maybe infinite scroll is the right thing. But if these are really completely independent pieces of content that you have, then maybe it makes more sense to index those individually.

I'm noticing a strange issue with e-commerce sites and Google Images. The site owner said that no images are indexed from category and product pages, and image search site: commands support that. The images are hosted on a subdomain and provided in XML sitemaps. But when checking Search Console and filtering by image search, you can see that the product pages do rank in Google Images, and the images hosted on the subdomain are indexed. So now that image sitemap reporting is deprecated, how do I check how many of these images are actually indexed? I don't know. Good question. So this seems like a good use case to bring to the Search Console team, to say we really need those image sitemap numbers back, or in the new Search Console, maybe that's something we can look at there. I believe there's also another search operator that you can use in Google Images to pick up the images specifically. I don't know if that's source: or something like that, but we've talked about that in the past as well, where you can specifically look at the image URLs and try to find those. So that might be one thing to look at and try there as well.

Is there ever a situation where a better targeted page for a search would be returned over a home page that ranks for the same query? Given that Google doesn't seem to want to show two URLs from the same domain, it seems to favor the less targeted home page over a better targeted inner page. I think that's always a struggle on our side.
On the one hand, the home page is one that we know is really relevant for the website. On the other hand, maybe a detail page is something that's very specific to this particular query. And finding that balance between those two types of pages is tricky; sometimes what happens is we just show both of them in the search results. But it's not the case that there's anything specific that you can do to say, I never want my home page to rank if this other page also ranks. It's not really that trivial to force that in one direction or the other. The thing that I'd recommend doing, if you're unhappy with the way that we're picking the pages to show in search, is to really just make sure that the relevance is as clear as possible within your website. So if you have a product page that's really important for you, then link to it from the home page, so that we really know that actually anyone looking at this website should go to this one product page, because that's really where all of the critical information is. So the internal structuring of these pages really helps us. Also the cross-linking within the content of the website, so that if you look at a category, you can find that product quickly; if you look at related products, you can find that primary product that you really care about really quickly. Anything that you can do to really make that particular product that you care about stand out as much as possible.

We had a lot of trouble getting search engines to show a comma within a large number in our meta descriptions. We noticed that when it's picked up from the text of the page it works, but not from the description. Is there something that we should be doing differently? I'm not aware of anything specific there. So when I saw this question initially, I was kind of surprised that this is even a problem. But if you still have an example where you're seeing this happening, I'd love to take a look and pass that on to the team. That seems like something that might be a small bug or something that we can fix on our side, which might make it a little bit easier to show those numbers properly, or better, in the description.

Do you still recommend doing a video sitemap? Yes, video sitemaps are definitely useful.

When Google Webmasters tweeted about the canonical URL issues, my traffic dropped by 50%. Our analysis tool says my website's different blog posts were ranking for the same particular keyword. So my question is, what could be the reason for this traffic drop, or should I look for other reasons? I think the 25th of April was kind of when we announced that these issues were resolved. So I would be surprised if you're still seeing anything specific based on those indexing issues. And if you are still seeing ranking changes there, I would assume that those are normal ranking changes and not something specific to this indexing issue that we had. And with normal ranking changes, obviously, lots of things could be playing a role there. So it's really hard to say.

May I jump in? Sure. Just maybe one last thing: the last question that I have here, about Google Jobs listings and the logo, I passed that on to the Jobs team to take a look at as well. So maybe they have some advice that they can post in the forum for that specific question. OK, your turn. Thank you.

In my country, Wikipedia is forbidden, unfortunately. And we have some backlinks from Wikipedia, we think, locally. I wonder whether these links may harm me or not.
If those links are crawlable by Google from where Google is crawling, then those links will be normal links. Even if it is an illegal website in my country, right? I can't make a judgment call with regards to legal or not legal. Just purely from a technical point of view, if Google can crawl those links, then we can use those links in our algorithms. So if you don't want those links to be taken into account, you can use a disavow file. Actually, I want them to count, but I have a fear about it. That's my issue. OK, that sounds like it should be OK. But it sounds like a tricky situation. So I don't know. I wouldn't want to change places with you just because of that. I think it's always tricky when there are bigger legal issues around the websites that you're interacting with. Yes. Thank you, John. Sure.

All right. Any last questions before we jump out? Hi, John. Hi. John, I have one simple question. How important is it for online shops to have pages for delivery, payments, return policy, and things like that, a page for delivery and payments? So primarily from a search point of view, that's something that we could be showing in the search results. So if someone is searching for, say, returns plus your company name, then maybe they would want to see that in search. But that's essentially the primary thing that you would see there. I think from a user point of view, users do care about this information as well. So just purely from a user's point of view, that's something where I would consider making sure that you have that on your website and that it's really well documented, so that anyone who's looking at your pages can say, this is a legitimate business, and if I have any problems, then I can return things here; if I have customer service questions, I can call them, I can email them, there's a contact form, all of that. So I think from a user point of view, that's definitely important. OK, thanks. Sure.

OK, let me just check the chat to see if I'm missing anything critical. Does Google care about the amount of comments on a blog post or product page? No, we don't count them, so to say. But anything that is within your page, be it a comment or something else, is content that could be used for indexing that page. So if there's something important in the comments, or something phrased in a way that's not phrased within your product page itself, then that's a good way to make that visible in search. Thank you.

I found a search result in Google that's older than five years and has a 410 status code, but it's still indexed. How could that be? I don't know. So the link that you gave there, it sounds like a technical status code page. So maybe they're doing something sneaky for Google. I think this particular case might be one where I would say, if they're doing something sneaky for Google, then I don't think it really matters so much, because it's kind of a useful page like that. But I wonder if people use this method to get 404 pages indexed. Yeah, I think you wouldn't get much value out of that. So that's kind of the tricky part. You can make a page that says this page no longer exists and returns a 200 code, and that would be the same. So that's something where I don't think you would really win anything from that; it would be like a soft 404. I wouldn't win, but it may be an SEO method, and that's why I wonder how they do that. So my general guess, based on the URL, which is getstatuscode.com, is that they're just doing this to have a sample page, but also have a page that is indexable in Google.
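To make the soft 404 point above concrete, here is a small sketch contrasting a real 410 with a 200 page that merely says the content is gone; Flask is used purely as an illustration and has nothing to do with the site being discussed:

    from flask import Flask

    app = Flask(__name__)

    @app.route("/gone")
    def gone():
        # A real 410: the status code itself tells Google the page is permanently removed
        return "This page has been removed.", 410

    @app.route("/soft-404")
    def soft_404():
        # A soft 404: the text says the page is gone, but the 200 status says it exists,
        # so the URL can linger in the index without providing any value
        return "This page no longer exists.", 200

    if __name__ == "__main__":
        app.run()

The first handler tells Google the page is permanently gone; the second can stay indexed indefinitely, which is what makes it a soft 404.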
Like in this particular case, I wouldn't worry too much from a Google point of view. OK, thank you, John, again. Sure.

How can I send you site details? Yeah, I guess the easiest way is on Twitter. If you ping me on Twitter, I can follow you, and then you can send me a private message, and I can take a look at that. All of the private messages I get, generally, I pass them on to the teams here that are working on these things. I can't promise that I'll have a real response for you one-to-one, because there are lots of people who ping me. But I do pass them on to the teams, and if there's something that they need to get back to you about, then I'll let you know about that. Otherwise, assume that the team is looking into it and trying to prioritize it somehow within all of the other things that they have to do.

OK, wow, we made it to the end, and not too far over time. OK, cool. Thank you all for joining. I hope this was useful. I have the next session lined up for Friday morning, European time, if people want to join in. And next week, I'll be at Google I/O. So if any of you are at Google I/O, feel free to drop by the web sandbox area and say hi. We also have office hours and some sessions for you. I think it'll be pretty cool, pretty intense, but cool. And then afterwards, we'll have our normal office-hours hangouts again. Thank you all for joining in. Thank you, John. See you next time. Bye, everyone. Thank you.