All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. We just have two guests here. They're fantastic guests, though, so that's good. Maybe more of you will jump in later on as well. The link is in the description and in the comment field where you can leave questions. As always, a bunch of people submitted questions, so we can go through those. But if either of you want to get started with the first question, go for it. Go for it. So I have a website, which is doing fairly well now. But it's not doing that great. I'm not getting that many users. So do you think it's a good idea if I start answering questions on Quora, and on Reddit to some extent, and post some articles about it? Is this a good idea to get something going? Possibly. So I think in terms of SEO, we probably wouldn't pick up those links because they generally have nofollow. So they wouldn't have any effect on the search side of things. But if you're drawing awareness to your content and your content is good and people come to your website and then they link to your website because it's a good website, then that's a good thing. And especially if you have a new website, you have to get things rolling somehow. So that's something where sometimes it makes sense to go to these other platforms where similarly-minded users are and to engage with them there and to make it possible to find your website, of course. But I think the important part there, speaking as someone who goes through a lot of these communities, is that you have to go there and genuinely join the community and engage there. You can't just leave a five-word comment and say, oh, that was a good question. Here's a link to my website. No, no, I'm not talking about spamming. That would just be spamming. Yeah. Yeah, I'm not talking about spamming or anything, but the relevant answers or questions where I can socialize more and tell the users about my website and everything. That would be helpful.
I mean, again, from an SEO point of view, it doesn't really affect SEO. It doesn't affect search, but it can be a good thing for your website. So if people are coming to your website and they find it's a great resource, and in turn, they recommend it to other people, that's a good cycle. I think that's generally good. OK. And I have one more question. Do you think it's a good thing to have external links on a website, or is this harmful? No, absolutely. Yeah, absolutely. Link to other sites, that's perfect. That's how the web works. I think, in general, also for users, that's really useful because they go to your website and they see that not only is there information on your website that you're providing, but you're also giving a source or you're giving credit to where you learned about this information, or you have links to find out more information if someone wants to go in more depth. And all of those are good things. So definitely put links on your website and let users use them. Time's up. Sure. All right. Let me jump through some of the questions that were submitted. So the first one is a pretty long one. I'll shorten it a little bit because there's a lot of information, a lot of links in there. So I think, in general, it goes into the topic of what the user here calls white label sites. So in particular, they were seeing things like, what was it? I think like price comparison sites or similar things. What was it? Coupon sites, I think, do this as well, where a company will take a subdomain of an existing website and put their content there and use that to promote their content. And I think this user was seeing a lot of those in the search results and kind of wondering, like, is it even worthwhile to make my own website, or should I just buy space with an existing website and put all of my content there? And I think the answer is tricky in the sense that on the one hand, the user is saying they don't like this practice.
On the other hand, they're like, well, should I do it too? I don't like it, but should I do it too? And I think that's kind of a tricky place to be. In general, when it comes to these kinds of things, we do take this type of feedback very seriously. I know that the search leads at Google have been talking about this exact topic for a while now to try to find ways to handle these appropriately. So by handling them appropriately, I don't mean we should treat them as spam and just delete all of these subdomains, because they're not really spam. They're just kind of sales pages, affiliate pages, that are hosted within another website. Maybe the right approach is to find a way to figure out what is the primary topic of this website and focus more on that, and then kind of leave these other things on the side. The other aspect that always plays into these kinds of configurations on websites is that, when it comes to quality, we try to look at the quality of a website overall. So if there are particular parts of a website that are really low quality, I don't know, really low-quality coupon pages, for example, where the coupons are essentially just the same thing as everywhere else on the site or everywhere else on the web, then overall, that could be degrading the quality of that site a little bit. So there are various aspects that come into play here. I don't think there is one clear-cut answer where we'd say this is exactly what Google should do, and delete all of these sites or demote them completely or get rid of all coupon sites, to be fair, or I don't know. And in these kinds of situations, I always find them very interesting, because I see in the discussions with the search quality leads how nuanced they try to look at this problem and how they really try to weigh the pros and cons. And the changes for these kinds of sites or this kind of feedback, it generally takes a while to kind of run through.
It's not something where the search leads will just say, oh, we should do this, and then that's done across the board. These are things that we have to evaluate fairly carefully and make sure that we don't cause any more problems by kind of taking action on one thing and then accidentally breaking something else. So I really appreciate all the feedback that you've been sending and all of the compilations that you've been doing about the details that you found there. And it is something that's useful for us to kind of help improve the quality of the search results overall. On Google Producer, we've been using our AMP version for the past five months, and our AMP pages were working fine. But for the past two weeks, without changing anything on our website, Google News shows our feed version instead. I have no idea how Google Producer works and how this kind of connection with Google News works, so I really can't help there. I do know, however, in the Google News for Publishers Help Center, there is an ability to send, I think, a contact form or even a live chat where you can send something directly to the team that works on Google News. So I would recommend doing that and getting kind of help from them directly rather than through the webmaster side. Because from our point of view, Google News is kind of a separate thing. Regarding structured data which is only used for semantic reasons, not for search result features: how does Google deal with data types and properties which are in pending status at schema.org? So pending types, for example, have a full description on schema.org, but they may be changed somehow in the future when they're fully integrated. So this is always an interesting question, which comes up from time to time. Primarily, when you use structured data on your web pages, you would use that to kind of reach some specific search feature.
So in particular, maybe you want the star ratings to be shown, or you want your recipes to be shown in that recipe-specific way, then you use the right structured data to make that happen. And the requirements for the structured data side, they're all documented in the Google developer's guide. So you can go check it out there. However, there are a ton more types of structured data that are available that are listed on schema.org. Some things are fairly obscure. Some things seem pretty unreasonable. And in general, what happens there is we do pick up that structured data on the pages. We try to understand which entities and what relationships are kind of being used there. And we use that to subtly try to improve the relevance of that page in the search results. So it's not that you'd have better rankings or that you'd have any particular search feature. It's more that we'd be able to better understand that, oh, this is about this specific entity. Therefore, I should show it more for queries that are around that entity. It helps us a little bit with kind of the relevance. It doesn't really change much from the ranking point of view. From a personal point of view, I think it's fine to add more types of structured data to your pages, but you just have to keep in mind there's not going to be an immediate effect. There's not going to be a visible effect from that. So usually, I recommend kind of taking a cautious approach to that and not just saying, oh, all types of schema.org markup, I'll put that on my page and use all of that, and maybe my site will rank higher. Probably that won't happen. So it's fine to go a little bit further than the requirements for a specific search feature. But I'd recommend trying to be reasonable about that and thinking about what you really want to achieve with all of this markup.
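To make that concrete, here is a small sketch of what going slightly beyond the required properties might look like. The recipe itself is a hypothetical example, and `suitableForDiet` is a regular schema.org property that doesn't trigger any rich result on its own, so it only adds semantic context. Python is used here just to keep the JSON-LD valid:

```python
import json

# A minimal Recipe JSON-LD block. The properties needed for the recipe
# rich result are documented by Google; "suitableForDiet" is an extra
# schema.org property added purely for semantic clarity -- it does not
# produce any visible search feature. All values are placeholders.
recipe = {
    "@context": "https://schema.org",
    "@type": "Recipe",
    "name": "Simple Pancakes",
    "image": "https://example.com/pancakes.jpg",
    "author": {"@type": "Person", "name": "Jane Doe"},
    # Extra, purely semantic property:
    "suitableForDiet": "https://schema.org/LowFatDiet",
}

script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(recipe, indent=2)
    + "\n</script>"
)
print(script_tag)
```

Validating the result in a tool like the Rich Results Test would still only report the recipe feature; the extra property is simply carried along.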
And if things are already kind of working well, then maybe you don't really need that. Hey, John. Hi. Hey, so my question is regarding structured data. Like, I mean, I've seen a lot of websites using different types of schema on their listing or category pages. So, like, I mean, what does Google recommend about that? Basically, I mean, I've seen websites marking up all the products they have, or the individual products, or sometimes with an ItemList or something like that. So is there any recommendation around that? I mean, how should we go and do this markup for those listing pages? I would just, so for the most part, when it comes to specific recommendations like that, I would stick to the guidelines that we have in the Google developer documentation, in particular with regards to specific features that you want to target. And there are some guidelines with regards to what you can do on listing pages. Should you mark up all items on the listing page? Should you have an aggregate price for a listing page? Usually no, because these are different products. But there are different guidelines with regards to different page types in the documentation. So I'd double-check that there. If you want to go past that, kind of say, well, I also want to give a lot more structured data that doesn't result in any visible search feature, again, that's totally up to you. And it's fine to do that if you want to do that. The other thing to keep in mind is, if you just take a look at other websites that are out there, a lot of sites do things wrong. So just because one website is doing something specific doesn't mean that they're profiting from doing that. A lot of websites do structured data wrong. A lot of websites do other technical things wrong. So I see this a lot when people come to me and say, well, this Google page is doing this thing wrong, and you can see that they're not being indexed properly. It's like, should we also do it like that?
Because Google is doing it like that. And usually the answer is, well, people working on these sites are developers like people working on all other sites. And maybe they don't know all of the information that you already know when it comes to SEO. So just because someone else is doing something doesn't mean that they're doing it right, or that they're profiting from it, or that you should go off and copy what they're doing. Sure, thanks, John. That's exactly my question. Like, I mean, I've seen sites putting these aggregate ratings and all, and they're getting those rich results. Basically, I'm kind of wondering whether these sites should be doing it, or whether Google knows this kind of thing isn't permitted and it can trigger a penalty or something like that. So that was what I was wondering. Yeah, I mean, the other thing to keep in mind is that sometimes people do things wrong and our algorithms don't catch it properly, and then they still get the rich results. Even though when you look at it, you're saying, oh, Google should recognize that this is spammy. And we do work to improve our kind of web spam algorithms to recognize these things automatically. But sometimes things slip through. And just because someone slips through with something that they're doing that you recognize is wrong, from my point of view, it doesn't mean that you should be doing it as well. Yeah, cool, thanks. Good day, John. Hi. How's it going? Pretty good. How about you? I'm doing well. Thank you. I had a few things on my mind. I'll try not to take too much of the time up. But one general question is, is there a way to identify every single URL from my website that's currently in the Google index? Or is it limited to a certain number of URLs that I can see? I don't think you can identify all of them. So OK. I mean, in Search Console, there is some data that you could pull out, I think in the index coverage reports, but that's also limited, I think, to 1,000 per category or something.
So for larger websites, that wouldn't work. We've discussed this with the Search Console team every now and then. And I think it's really tricky, because some websites have a ton of URLs. And I don't know how useful it would be to just send a download file, like here are 10 million URLs we've seen from your website. They don't get any traffic, but we know they exist. We have them indexed. I don't know. But if you run across use cases where you'd say, well, this is what I'd need it for, and this is how I could make my website fantastic if I had this data, then let us know. Like, more arguments are always good. Maybe I'll type an example in the chat, because I think it's a very specific example that I have in mind, like you said. But maybe if you see that use case, it might provide some other insights or ideas. So I'll type that up for you. OK, cool. And then this is regarding link building. So I probably get several emails per week from people saying, I'd like to write an article for your site. Can I write an article? And in exchange, you place a link back. And I think I remember you saying at one time that that's not really a recommended practice for link building, and that Google really cares about natural links. Is that correct? That's correct, yes. OK. Could I get an example of, say, what a good link would be then? A good link. So I mean, the traditional good link is someone who comes across your website and thinks it's a fantastic website and recommends it to other people with a link. So yeah, so basically, on the one hand, that involves some amount of self-promotion from your side. Like, you have to get some people to come and visit your website somehow so that they can recognize that this is actually a good website. And there are lots of ways that you can do that. And then that also involves one of those people, or some of those people, going, well, this is a really fantastic website. And I have another website that I can link from where I can link to your website.
So it's not the case that every visitor coming to your website will say it's a fantastic website, and I also have a website, and let me link to your website from my website. But some of these people, they can. Right. I mean, it seems like it would be a rarity. I've never really engaged in proactive link building. But I honestly feel like we've kind of fallen behind because of that. I think other websites, maybe in the niche, have a lot more inbound links. And I'm not sure whether they're actively reaching out to other webmasters. Hey, can you link to us? Can you link to us? I'm not sure. I should really do some more research on that. But I think for some reason there's an importance there, right? I think it's tricky because on the one hand, it is sometimes useful to reach out to people and say, like, hey, look at my website. It's like, you have a great website. I have a great website. Take a look at my content. Our content kind of aligns. Maybe you'd be able to recommend my content if you like it as well. I mean, there are different ways of framing that. There are lots of really kind of more spammy ways of doing it. Like you mentioned, like people are just saying, well, look, I have this web page that matches five keywords on your other web page. Can you link to it? That's not really that useful. But the other thing to keep in mind is we use a lot of different things for ranking. And it's not just links. So especially if you're active in an area where people tend not to link a lot, that's something that the other competitors have to work with as well. And where we do try to pick up other signals to see, like, this is actually a pretty good website. Links are really important for us to find content initially. So it's like, if nobody links to your website ever, then we're going to have a hard time recognizing that it even exists.
But once we've kind of found it and started crawling it and started indexing its content and seeing that it's pretty good stuff, then those links are generally not that critical anymore. OK, thank you. Appreciate that. Do you recall about a month ago I might have asked, like, if I search for my site's trademark name, it's a very specific site name, it doesn't really rank first. There are other websites that mention the site's name that rank above it. Do you recall anything like that? It happens every now and then. OK, I just think, like I said, I think we've faced some sort of major algorithmic deal or issue. And I wasn't sure where to start or how to figure that out. But one of the telltale signs is we have this very specific name. Like, if you search for Twitter, you're going to expect Twitter to show up as the top result. That's not really happening. So all right, I just wanted to tell you. OK. I mean, if it's something where you'd say, this is an error in the search results, then drop me a link to a search results page, and I can pass that on as well. OK, will do. Thank you very much, John. Sure. All right, let me run through some more of the submitted questions, and I'll get back to you. Perfect, thank you. All right, we're working on a page speed initiative to defer images that are outside the viewport on page load, per the recommendation of PageSpeed Insights. Our concern is that the crawler will have trouble finding the images that are below the fold, since they don't show up in the initial page load. How shall we balance the recommendations for speed and the ability for crawling and indexing? So this is something that we've recognized as well. So we have multiple recommendations for lazy loading of images, which is kind of this deferred loading of the images, that work well in search as well.
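For reference, the attribute John goes on to mention is native lazy loading, the `loading="lazy"` attribute that recent Chrome versions support on plain `<img>` tags. A minimal sketch, with placeholder URLs and a helper function invented just for illustration:

```python
# Native lazy loading: a normal <img> tag plus loading="lazy".
# Because the tag keeps a regular src attribute, crawlers can still
# discover the image without scrolling, unlike JavaScript lazy loaders
# that only fill in src once the image enters the viewport.
def img_tag(src: str, alt: str, lazy: bool = True) -> str:
    loading = ' loading="lazy"' if lazy else ""
    return f'<img src="{src}" alt="{alt}"{loading}>'

tag = img_tag("https://example.com/product-42.jpg", "Blue running shoe")
print(tag)
# -> <img src="https://example.com/product-42.jpg" alt="Blue running shoe" loading="lazy">
```

Browsers that don't support the attribute simply ignore it and load the image eagerly, so it degrades gracefully.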
And in particular, with the new Chrome version, there is an attribute, I think, I forgot the name, that you can just add to normal image tags, and it'll automatically do the deferred loading for you. And using that attribute would work perfectly for search because it's a normal image tag otherwise. So that might be an option that you could go with, or one of the other options that we have listed for kind of lazy loading of images that work in a search-friendly way. The other thing that you can also think about here is whether or not you need to have all of those images indexed at all. So in particular, think about how users might be using kind of this visual search to find your website and whether or not those images are actually kind of a critical part of that path to your website. So if these are, for example, different product images and people are searching for the different product types on Google Images, then definitely those should be indexed. But if these are more kind of like stock photo images that are more for decoration on those pages, then maybe that's not something that you actually need to have indexed in Google Images, because it's not like a perfect segue to your content from Google Images. So that's something to kind of think about. We redirected several old articles to new URLs with updated content, but the old URLs are still in the XML sitemap. Will it hurt our site to have the redirected URLs in the sitemap file? It won't hurt your site in this sense, because even if we send people to the old URLs, they'll be redirected to the new ones. So it's not a critical thing. However, when it comes to picking a URL that we show in Search, we use multiple factors to pick those URLs. And one of them includes redirects. So if you're redirecting to a new page, then we'll try to treat that as a sign that you want that new URL to be shown in Search.
The other is sitemap files, where if you submit a URL in a sitemap file, we'll think, well, this is the one that you want to have shown in Search. So it ends up being a kind of a conflict on our side that you say this one is the one that you want to have shown, and you also say you want the other one to be shown. And we'll pick one of those. We might go to the sitemap file. We might go to the redirection target. And essentially, that's the page that we'll show in Search. The good part, like I mentioned here, is that it doesn't change rankings. It's only a matter of which URL is shown in Search. So we find the content. We can rank that content normally. It's really just like which of these URLs is the one that we show in Search. Then we have a giant question on page rank flow and the value of anchor text in links. So I think just ahead of all of these detail questions, a lot of these kind of technical details are things that we don't have defined in a way that webmasters externally can follow them as technical documentation. So on the one hand, if you find things that work in one particular way, that doesn't mean that they're doing that by definition or that they will always do that. It might be that it changes over time. And in particular, when it comes to links, when it comes to page rank, between pages, these are things that, from my point of view, we try to use these links to better understand the content and the context of individual pages. But it's not that we would specify 97.5% of page rank from this page goes to this page if you do a link like exactly that. That's something that might come out if you do a lot of experiments, but it doesn't necessarily mean that it'll stay that way or that it's something that you should really focus on. My general guess with a lot of these questions is you can spend a lot of time trying to figure out how exactly page rank might be flowing on Google's side. 
And you could also be spending a little bit of that time to improve your website. And if you improve your website in kind of a natural way, then you'll have a much stronger effect than if you figure out exactly which percentage of page rank goes from one page to another. And like I mentioned before, we use a lot of other signals than just links when it comes to ranking. So you're just kind of focusing on this one tiny part. But anyway, let me take a look at some of the questions here. Does all page rank and anchor or alt attribute text go through redirects? I guess so, or more or less at least. Essentially, with a redirect, what happens is you're telling us one URL got replaced with another one, and then it's a matter of canonicalizing to that redirection target. So pretty much all of that just works. Is a text link better than an image link with a relevant alt attribute? Either of these links is fine. Again, it's something where we work with the natural web, and we try to do things that work well on the web. And people link in lots of really weird ways. If I have three links on page A that all point to page B, will it be the very first link found in the HTML that Googlebot uses, or are all three links used by Googlebot? Again, this kind of goes into the same thing, like which of these links passes the most page rank and where can I get the most signals? And from my point of view, you can probably figure some things out through experimentation, but I don't think it'll be that useful. So I would continue to build a website that's kind of natural. And if you link three times to the same page, that's fine, that happens. If I have links to products on a category page, and the links are normal a href links, however, the links contain both an image and text, is it OK to keep that setup, or is it better to split those links so that there are two links per product, where one link is the image and one link is the text? Totally up to you.
Some people have one link, some people have two links; I don't think anything matters in that regard. Often websites have links in the footer on all their pages linking to service pages, like terms of service or contact us. These types of links normally never drive traffic in organic search. I think the question goes into: should I block some of these from passing page rank? Like I mentioned, pretty much all websites have these kinds of links, and we have to deal with them. So I don't think it makes sense to do anything fancy to try to hide these types of links, because we have to deal with them anyway. It's something that our algorithms are pretty good at figuring out. So I wouldn't do anything really crazy with those kinds of links. Do opt-in pop-ups or chatbots affect SEO directly or technically, other than by visitors being turned off? Would delayed pop-ups or exit-intent pop-ups have less impact on SEO, if any? So I think we talked about this before, like that the natural link is kind of the person that comes to your website and really likes your content and links to it. And on the other hand, you're saying, like, I have this cool feature that I can put on my website that will annoy my visitors. I don't know how those two sides really fit together. From my point of view, if you're kind of purposely annoying your visitors, that's not really a good move. So I'd primarily try to really highlight the value of your pages first. And then if there's something that you can kind of upsell to the user later on in their journey across your website, that might be something that you can use. But if you try to upsell them before you actually show them anything, that generally doesn't work out so well. With regards to SEO, what happens here sometimes is that we'll see this as kind of an advertisement on a page, and we'll think, well, the above-the-fold content doesn't have a lot of details about what this page is about.
So maybe we shouldn't be treating it as relevant as other pages that we might have. So that might be something to keep in mind from an SEO point of view. But I do think the non-SEO aspect will be much stronger here than anything SEO related. Is there any negative impact when using a pseudonym as a blog author in general or in the about us page? That's something you can do. I think that's generally fine. I would, however, kind of similar to the previous question, focus primarily on your users and think about how your users would react to that. And when you look at literature, there are lots of kind of well-known pseudonyms that are writing good content. When you're talking about something that you want to make sure that the person who's writing about it is qualified and really an expert on this topic, then maybe using a pseudonym is not really the best approach. Like if you're writing about medical content or if you're writing about, I don't know, some financial information or security advice for websites and the person who's writing is Dr. Know-it-all or something where it's not really a medical doctor, it's just kind of this made-up name that you've created for this persona, then that doesn't really bring across to users that they can trust this person or they can trust the content on this website. So that's, I think, something you kind of have to balance there. In the Search Console API, we've been noticing data loss and UI discrepancies. Is there any timeline or any information about this upstream data issue? I don't know. I think you asked about this before. In general, the Search Console API uses the exact same data as we use in the user interface. So if there's data that is missing in one part, it should be missing in the other part as well. I think around April this year, we had a bunch of indexing issues where things kind of got stuck and we lost some data. That's something that was also stuck in the UI. 
So that's something where you would see the same kind of issue across the API and the UI. I'm not aware of anything where, kind of by design, things are different in the API, because we try to use the same data sources for both of them. Otherwise, it's not really efficient. I set up my AMP page with the Google Analytics tracking code. It works well, except for the real-time data. I think you tweeted about this, and I sent you to some of the AMP folks there. I really don't know how the AMP Google Analytics setup works, so I'm not sure if you'd see that there or how you need to plug that together. I don't know. Is this canonical reference correct in reference to a publisher news site? Site A syndicates some stories from site B. Site A references AMP with the proper AMP HTML. Site A's AMP adds a canonical to site B. Is this a bad practice? You can do that. So in general, there are two configurations that you can do. On the one hand, you have kind of the normal link rel AMP HTML back to the, I don't know what you would call it nowadays, legacy page. Legacy page. And from the legacy page, you have the canonical back to the preferred canonical version that you want. The other approach would be to kind of have the link rel AMP HTML to the AMP page, and from the AMP page, you have the canonical back to the preferred canonical that you want. So both of those approaches essentially work. They roll up to kind of the same setup. Sometimes it's a matter of how your CMS is set up with which one you choose there. But essentially, that's doable. For publisher websites which work on a subscription model with paywalled content, does serving the entire content to Googlebot fall under Google's guidelines if the content is served with paywall structured data, with the user seeing no content or only snippets? And there's a link to, I think, Twitter or something. In general, that's fine. Yeah, that's the old First Click Free, and it's called something else now.
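The replacement for First Click Free is Flexible Sampling, and the markup that goes with it is the paywalled-content structured data: mark the article as not freely accessible and point a `hasPart` block at the paywalled section. A minimal sketch, where the headline and the `.paywall` CSS class are placeholders for whatever the site actually uses:

```python
import json

# Paywalled-content structured data: isAccessibleForFree=False on the
# article, plus a hasPart block whose cssSelector names the element
# that holds the paywalled text, so crawlers can tell paywalling apart
# from cloaking. All values here are placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example subscriber story",
    "isAccessibleForFree": False,
    "hasPart": {
        "@type": "WebPageElement",
        "isAccessibleForFree": False,
        "cssSelector": ".paywall",
    },
}
print(json.dumps(article, indent=2))
```

The JSON-LD goes in a `script type="application/ld+json"` block on the article page, while the full text is served to Googlebot as usual.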
We have the documentation in the Help Center, as well as the developer documentation on the markup. And that's something you can do. Usually it's a matter of finding the right balance between showing some content to users when they come to your pages, so that they understand this is actually a good website and they continue to kind of come back, and also limiting the amount of content that you show them, so that you encourage them to actually take out a subscription or sign up somehow. And finding that balance is sometimes tricky. Sometimes that's something that you can also do dynamically, where you say, well, I recognize this user has been coming back to my page, I don't know, once a week. Maybe I can encourage them this time to actually subscribe, whereas other people you might recognize that they've never been here before, and you want to show them the content first. That's, again, totally up to you. Does mobile-first indexing affect my mobile SEO? Yes, I guess. So with mobile-first indexing, we only index the mobile version of the page, and we use that for both mobile and desktop search results. So if there's something that's only available on the mobile version of your page, then that will apply to your mobile search results as well as your desktop search results. Does domain migration affect my SEO? Usually yes. So at least in the past, it was definitely the case that it was more visible, in that you would move to a new domain and it would take several months for everything to settle down. Nowadays, our systems are pretty good, in that if you move to a new domain and you follow all of the steps in our Help Center, then sometimes within a day or two things will just switch over, and you will see pretty much no change with regards to search. But any domain migration, any bigger change of URLs within a website, is always kind of a scary situation for me still.
So I would really be careful about what you do there, and make sure that you really follow the Help Center guides step by step. There are also some really comprehensive external blog posts that cover all of the steps as well, with checklists and all of this, and really go through all of that. You might look at some of these checklists and say, oh, I don't need to check all of the URLs on my site. I know I can set up all the redirects. But on the other hand, following a checklist like this really makes sure that everything is working, so that if something goes wrong in the end, you can go through the checklist and see, well, I skipped this step. Maybe it's not something I should just skip. Maybe it's something I should actually do. Or if you end up getting help from other people, like from Googlers or from external webmasters, having that checklist and saying I did all of these things helps them to figure out which parts you might have skipped or which parts you might have done incorrectly. So that's something where, if you do everything right, a domain migration is just a matter of a day or so, and everything will go really effortlessly. But there are lots of ways that you can do things wrong, and there are lots of other types of migrations that you can do which do end up taking a much longer time. Hey, John. Hi. So for migrations, I mean, I've been following this topic for a while. So basically, I'm kind of researching while doing these migrations from one domain to another. So will Google be able to detect all those signals which are present for the old domain? Will they be passed to the new one, especially in the case where, let's say, the old domain has some sort of knowledge panel, and there are some very high-quality citations from various publishing websites? How will Google see it in that case? So if we can recognize that this site has moved completely to the other site, then we will forward all of those signals as well.
So all of the kind of things that are associated with it, which could be knowledge panel entries, could be featured snippets, all of that will move to the new domain. This works fairly effortlessly if it's really a move from one domain to another, and everything else within the website stays the same. So things like internal linking, the URLs themselves, all of this is essentially the same thing, just a different domain name. Then we can switch that over immediately. On the other hand, if you change things within the URL, like if you move from .html to just no ending on the files, then that means we have to reprocess the whole website to be able to understand it again. So that's something where, in the end, the state will probably be the same. But it just takes much longer to get there again. That's useful. Cool. Let's see. I think a handful of new questions, which is good. We're sending our job posting pages to the Indexing API, but that's just a small part of the website. Should I keep sending the rest of the pages with the XML sitemap? What to do with expired pages, like when the job expired? Should I redirect those pages to a directory, or is it fine to return a 404 or 410? So it's fine to just use the Indexing API for a part of your website. That's perfectly fine. The Indexing API only applies for job entries and, I believe, livestream data, but nobody asked about livestreams. So maybe that's not as relevant for most sites. Putting the rest of the site's content in a sitemap file is perfect. In a sitemap file, I recommend having the last modification date set and making sure that it really matches the primary content on the page, so that we can trust the last modification date and use it to pick up normal HTML changes as quickly as possible. With expired job pages in the Indexing API, you can submit, I think, I don't know what the exact name is, but you can submit pages that don't exist anymore. So with the Indexing API, you can say, well, this job is now filled.
It's no longer available. Please remove it. And we did that specifically for jobs, because that's something which is really annoying, when you apply for a job and you find out, actually, it's no longer there. Using the Indexing API for that is great. Returning a 410 or 404 for those URLs is perfect. You can also redirect to something else. Usually what happens with a redirect is we will see that and treat it more like a soft 404. So it's essentially internally kind of the same as a 404 page. When it comes to 404 pages, in general, I recommend making sure that it's a useful 404 page, so that when users go there, they get some information about what they can do as a next step. So if someone linked to this job entry and it just returns a generic 404 page, then that's not very useful. But if on that 404 page, you say, well, oh, this job was this kind of job, and I have other kinds of jobs which are kind of similar available, and I can just list those on the 404 page, then that might be a better option. Are you German? Yes, I'm German. What's the best advice you would recommend to anyone regarding coming up with an idea for a new website? One example I heard is from Steve Jobs, where he said, we make what we want for ourselves. Looking for more wisdom like that. I have a hard time competing with Steve Jobs. I think that's generally a good idea. Just from a personal point of view, I have a lot of websites that I started but never actually finished up. So I think I'm probably the wrong person to ask about advice for a new website. Usually what happens is I come up with this really fantastic idea, and then I go out and get a bunch of domain names, and then nothing else happens. But maybe things work better for you. All right. I think that's pretty much it from the submitted questions. Hi, John. More on the chat as well. Yeah, or otherwise, just go for it. Hi, John. I have a question about schema.
And when the search intent is local search, we see the map. And then we usually see these kinds of directories. What is the importance of schema there? And does it make sense to make a directory just for a city? Because I usually see these very big directories, for instance, talking about lawyers. And there are quite big directories talking about lawyers in Spain. And it's almost impossible to overcome this situation. Does it make sense to go local and with schema? It's hard to say without looking at specific examples. So it's kind of tricky. And I don't know what kind of website you would have. Is it also kind of a directory site, or is it like a single lawyer website? Yeah, a single lawyer website. So there are some things that you can do with structured data on normal business websites, things like specifying the location, the phone number, the opening hours. All of that is kind of useful. That usually flows into things like the Maps listing, the Maps onebox that we have. So that kind of helps a little bit there. I don't think that would specifically change the overall rankings of the page within the search results there, though. So we might be able to recognize a little bit better, oh, they're looking for this type of business in this city, and we can recognize that this website is actually this type of business, and it's located in that city. Then that does help us to adjust the rankings a little bit. But it can sometimes be hard to kind of pass the, I don't know, bigger competitors that are out there that have a lot more time and money to invest into these kinds of topics. So that's something where I don't think there is one simple answer that will get you to rank above these kinds of sites. Sometimes what's useful is... Sorry, the tricky thing is I usually find these kinds of directories, but for other queries, I don't see them. Maybe I think the search intent is to investigate before going to this place, and sometimes it's not. Yeah.
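The kind of business structured data mentioned above, specifying the location, phone number, and opening hours, can be sketched as JSON-LD. All of the details below are hypothetical, using schema.org's Attorney type as an example for a single lawyer website:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Attorney",
  "name": "Example Law Office",
  "telephone": "+34-600-000-000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "Calle Ejemplo 1",
    "addressLocality": "Madrid",
    "addressCountry": "ES"
  },
  "openingHours": "Mo-Fr 09:00-18:00"
}
</script>
```

This is the markup that can feed into things like the Maps listing; it does not, by itself, change where the page ranks.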
Yeah, I think that's the other aspect there. Sometimes it makes sense to focus more on other queries where you have less of that specific competition. So that might be an approach. Thank you. Hi, John. Hi. Yeah, I have a question on an ordered list. So what's happening is, due to some CMS limitation, the list is getting broken in between. And it's retaining the last number and getting continued. I pinged in the chat how the code looks and how the user sees it. So my question is, is Googlebot going to read this as a single list or two lists? I don't know. I think I really don't know. So on the one hand, it's something where probably there is not a strong direct effect there. So possibly what could be happening is that we would pick this up and try to show it in a featured snippet. So for example, if the title is like how to do something specific, and then you have these three steps, then we would probably be able to pick out those items regardless of whether they're in one kind of list element block or in separate list element blocks. And we try to pick that up even if they're not in list element blocks at all. So we try to understand what the actual steps are. And sometimes the steps are not in list element blocks, but rather with, I don't know, H3 or H4 headings, those kinds of things. So that's something where we try to be as flexible as we can in pulling out this content and recognizing it appropriately. I think it's primarily a matter of what it ends up looking like in the featured snippets, and not something that would affect how we would rank the site overall. So people use weird HTML structures that work well in browsers but that are really hard to understand. And for the most part, we can live with that. We can work with that anyway. And if you're seeing this in a featured snippet and you're seeing it wrong in the featured snippet, you can change the HTML and then resubmit it for indexing.
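The split ordered list described in this question, where the markup breaks into two `<ol>` elements but the visible numbering continues, typically looks something like this (the step text is hypothetical):

```html
<ol>
  <li>First step</li>
  <li>Second step</li>
</ol>
<p>Some text the CMS injects between the two blocks.</p>
<!-- The start attribute makes the second list continue numbering at 3. -->
<ol start="3">
  <li>Third step</li>
  <li>Fourth step</li>
</ol>
```

As the answer notes, parsers that extract steps tend to cope with this kind of structure, but a single well-formed list is the safer markup.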
And you should be able to see that change fairly quickly. Thank you. Hi, John. Hi. Hi, John, this is Akhila. So I have a question and a Search Console concern. So recently, I'm facing two kinds of errors. One is like submitted URL marked noindex. And another one is submitted URL seems to be a soft 404. I'll come to the submitted URL seems to be a soft 404. Though my page is live and working properly, I'm still facing the soft 404 error. Why is it coming like that? So soft 404 means we look at the page and we think that it's not something that is indexable. So that could be something where there's an error being shown visibly on the page but it's still returning 200. It could be something where a page is noindex, or it's redirecting to another URL, where we say, well, it doesn't exist anymore, so we will treat it as a 404 internally. OK, so the next is the submitted URL marked noindex. So previously, we did a noindex for 1,000 pages. Then later, we removed it. But now, in the coverage report, it's still showing the noindex issue. Then later, I added the XML sitemap and the robots.txt to make the bot crawl. But even after that, the error persisted the same. So how to recover from that? Sometimes it takes a while. So making a sitemap file with the new last modification date, that's a really important step. What you can also do for individual URLs, if it's just a handful of pages, is to use the URL Inspection tool and then resubmit them there. That's kind of the fastest approach. With a sitemap file, we use the sitemap file to help us crawl better on your website, but it doesn't replace all of the normal crawling. So it's not that we will automatically jump out and crawl and re-index all of these pages. But usually, it's mostly just a matter of time, where you make a bigger change on your website, and it takes a couple of weeks, maybe a couple of months, for everything to be seen again by Google. So can I ask that?
Yeah, thanks, John. But usually, it's mostly just a matter of time. Oops, a little bit of echo here. Cool. Any other questions from your side? So John, can I ask another question? Sure, go for it. Yeah, previously, people asked about domain A and domain B and the redirection part, OK? So I have two domains. On domain A, I have just redirected everything to domain B. So domain A has 1,000 pages. On domain B, also, I just created 1,000 pages. I redirected the entire thing. So does Google consider that it's the same kind of stuff, or will it reduce anything in the ranking or something? So you're redirecting one domain to another one? Is that correct? Yeah, yes. The whole pages. Whatever pages I created on domain A, the same pages I created on domain B, and each and every page got redirected properly. So does it affect the ranking anyway? I think that's a good approach. So what will usually happen if you merge two websites, like if they were independent before and you redirect one site to the other one, is we will have to reprocess that. And usually, that results in us ranking the one version that's left a little bit higher. It's not the case that you can just add up the ranking from both sites, and then suddenly you're number one for everything. But we reprocess it, and over time, that should be reflected there. Merging websites like that is, again, one of those situations that is trickier. So moving from one domain to another is something we can do fairly quickly. But if we recognize the old domain has existed before and we're merging something new into that, then we have to reprocess everything. And that takes a couple of months. Yeah, thanks, sir. Sure. All right. Looks like we're running out of questions. That's fantastic. Anything else on any of your sides that you'd like to mention? No? OK. Yeah, yeah, another one. Yeah. Good. Yeah. So in the coverage report, there are some excluded URLs.
So what is that? Why are they getting excluded? In Search Console, the excluded URLs. Yeah. I think that's kind of a bit tricky, because these are basically URLs that we don't use for indexing. And a lot of times, it's normal that we don't use them for indexing. So if a page has never existed, then if someone links to that page, we'll say, oh, we would have indexed it, but we can't index it because it's a 404, or we can't index it for some other reason. And for the most part, that's not something that you need to worry about. It's more that we're giving this to you for your information, so that you know about it, but it's not something that you need to fix. It's not an error on your page. Thanks, John. Cool. All right. Oh, yeah. Hi, John. Hi. Hi. I have one question. Is there any best way to appear in Google Discover? Like, suppose I have a mobile-friendly page and a good amount of content, but still my pages are not appearing in Google Discover. So what can I do to make my website appear in Google Discover? I think that's a fascinating question, because Google Discover is one of those things that doesn't have a query associated with it. So it's not that people are explicitly looking for your website, but rather that we think this website might match their interests. So with that in mind, it's something where you almost need to think about which users you want to talk to, and how you can provide your content in a way that would be interesting for them, and what kind of topics they might be interested in. Like, I have one website related to hedges. And some of the pages are appearing in Google Discover and getting a lot of impressions and clicks. But many pages with a good length, like a good 2,000 words of content, are not appearing in Google Discover, while some are. No, I think that's a matter of kind of finding that magic spot of you providing content that is interesting to people at that moment in time.
The one thing that I would recommend doing, though, is thinking about how you appear in Google Discover. So I think at the moment for AMP pages, for example, if there is a large image associated with the page, it shows a really nice big image in Discover. So that might be something that would make sense for non-AMP pages. So is it necessary to implement AMP for Google Discover, or can I appear in Google Discover without AMP? Yeah, you can appear with normal HTML pages as well. But with normal HTML pages, if you have large images associated with them, I believe there's a form in the Help Center that you need to submit so that Google knows that it can use the large images and show them in Discover as well. With AMP pages, you're explicitly saying, this is my image, and I want you to show it like that. And for normal HTML pages, you kind of have to opt in to that approach. OK, thank you. Sure. Hey, John, I have one quick question, I mean, if you have a few minutes. Yeah. So I was kind of wondering, I mean, Google already has a kind of recommendation, or not really a recommendation. You're saying, I mean, we are not recommending any specific version, whether it is a ccTLD or to go by the, let's say, subdirectory model. So I mean, from an SEO point of view, what would you recommend if we go for international markets? I mean, there are pros and cons on both sides, but specifically from the Google side, what can one think of? OK, if somebody wanted to expand their business, what would be a good move? So my recommendation there is always to use fewer URLs instead of more. So even if you're expanding globally, then instead of going out and buying all the country top-level domains and putting websites out there, I would think about really how you want to expand step by step, and what kind of structure you can create that lets you expand kind of step by step.
And that could be instead of buying all of these different English-speaking domains, for example, maybe you just have one strong English-speaking domain. Because anytime you split things up across a number of different domains, or a number of different sites, or a number of different URLs even, you're kind of diluting the value of that content across all of these different URLs, and they have to rank on their own. So if you have one page for India, one page for the UK, and one page for the US, then these pages are all having to rank on their own. Whereas if you have just one page in English, then that page can kind of be as strong as all of these three or four or whatever other pages together. So that's kind of my approach there. I think there are also a lot of things that you could think about with regards to your users, like do your users expect to see a local domain or not? And that's something where you have to do research almost. Other things that might play a role are kind of policy and legal questions. Do you need to have a local domain for individual countries? Because there's some legal requirement that you need to do that. And if you have these kind of legal requirements, then obviously SEO is kind of a second priority. It's not your first priority to kind of go out there and say, well, it's illegal if we do it like this, but SEO says we should. So therefore, we'll just do illegal things. I don't think it works that well in real life. So all of that said, I don't have any obvious answer that fits all models. I think it's really something that depends on the website. I tend to have fewer URLs rather than more URLs. And if this is the first time that you're working on an international website, I would definitely get help. There are lots of really good SEOs that focus on international SEO. 
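One standard tool for the kind of international setup discussed here, especially when everything lives on a single domain with subdirectories, is hreflang annotations, which tell search engines which regional variant of a page to show. A hypothetical sketch, where the domain, paths, and locales are all examples:

```html
<!-- Placed in the <head> of https://example.com/en-gb/page,
     and mirrored on each of the other variants. -->
<link rel="alternate" hreflang="en-gb" href="https://example.com/en-gb/page" />
<link rel="alternate" hreflang="en-us" href="https://example.com/en-us/page" />
<link rel="alternate" hreflang="en-in" href="https://example.com/en-in/page" />
<link rel="alternate" hreflang="x-default" href="https://example.com/page" />
```

Note that this only helps once there genuinely are separate regional pages; if a single English page serves everyone, as suggested above, no hreflang is needed.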
And at least for the first time, make sure that you're getting all of the help that you can get, so that you don't end up creating something really complex and having to tear that down and change it significantly. Thank you, John. All right. OK, maybe we can take a break here. I'll set up an end. All right, one more, one last question. I was in the queue, actually. So sorry. So John, this was regarding one question. I was just going through some documents where I was checking that FCF, first click free, has been removed now for all the websites in the SERPs. Is it true that it's no longer like FCF used to be? It's different now. It has a different name. I forgot what the name is. Flexible sampling. So I'm talking about first click free. Yeah, we changed that model so that it's no longer based on first click free. It's called flexible sampling. Essentially, it's the same thing. But the way that you set it up is by using structured data on the pages. All right, all right. And one more small thing I wanted to know. I know that adding pop-ups is some kind of spamming for users. But it was a big discussion with the product team, and they were interested in adding three pop-ups, you can say. Now we have somehow reduced them to two or one. So is it really required that we should tell them, please remove all the pop-ups, or are some pop-ups, from a marketing perspective, OK for Google? I think I'm not sure how we have that phrased in the blog post. So we did a blog post about that, specifically on mobile, I think last year. I would double-check to see how we framed it in that blog post. I think it's a matter of how much of the visible space on the page is still available, so that users, when they go to your pages, can still recognize that what they were looking for is actually there. Cool, thanks. All right, cool. OK, let's take a break here. It's been great having you all here. Great that you could stick around a little bit longer. And thanks for all of the many questions.
I'll set up the next batch of Hangouts later today. Next Friday, I'll be doing another one with local folks here from Zurich. And we'll see how that works out. And I think that's pretty much it from my side. Hope you all have a great weekend. And see you next time. Bye, Jeff. Bye, everyone. Bye. Bye, John.