All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a search advocate at Google. And part of what we do on the Search Relations team is to reach out and talk with webmasters and site owners about any issues they might have with regards to their websites and Google Search. A bunch of things were submitted already on YouTube. But like always, if any of you want to get started, feel free to jump on in with the first question. Hi, John. Hi. I have two questions, actually. So one of our clients has an e-commerce website, and they have a blog as well. Both are set up on WordPress. Now, we have a lot of content on the blog, which is really high-quality content, and people are linking to our blog as a reference. What we did is create some internal links from the blog posts to the product pages and product category pages. Now, the backlinks we are getting to our blog posts, will they help our product pages and product category pages rank organically? Yeah, that should work out. I think that's almost a best practice, even. You write about something that's related to those products, or related to your industry, to the type of work that you do, on your blog. That's a good place to put out regular content. That's content that's perhaps easier to attract links for. And from there, you can link out to the relevant parts of the rest of your business, essentially. So from that point of view, I think that's a perfect setup. Now, the next question is one I got from a client this morning, actually. He wants to set up his e-commerce website on Shopify, but he wants to use WordPress for the blog, so he needs to create a subdomain for the blog section. So the question is, if we use a subdomain and get backlinks to the blog posts, will that still help the product pages and product category pages rank organically? Sure.
So it will be no effort. Yeah, I think that's perfectly fine. Some sites even have the blog on a separate domain for whatever technical reasons. That's essentially up to you. I think, in general, having it as close to your primary site as possible makes sense, because then also for users, it comes across as one big structure. And then it's kind of clear that, oh, this link from this one part of the site links to a different part of the site. And it doesn't come across as something, I don't know, unexpected, like you click on a link and suddenly you land on a completely different site. If it looks the same, if it's essentially hosted in a similar place, then that's, from my point of view, a good idea. Thank you. So John, in terms of Google understanding that even if it's on a subdomain or a different domain, I assume interlinking plays an important role for Google to kind of understand that two properties are from the same entity, from the same website, from the same company. What other things are Google looking at? Does design play a role? Should design for the blog, for example, be very similar to the website, hosting, things like that? From my point of view, that's more a factor from the user side and less an SEO factor. So from our side, it is important that we can understand the links between these two pages. Sometimes what helps us is to understand that these are part of the same entity, and in general, having it on the same host name is a perfect way to do that. Having it on a subdomain is something that could also work. But essentially, it's about being able to understand that it is one bigger entity. And the main reason for that is that we can kind of build up the trust for this site and understand, oh, this is actually a very good site. It has a lot of information on these topics. There's a lot of information here. And we can understand the overall structure a little bit better. And for that, it doesn't need to look the same.
It doesn't need to be hosted on the same setup, on the same network or anything like that. It's really, from our side, I'd say mostly a matter of having it on the same domain so that it's clear for us that this belongs together. The Shopify example is a good one, since you cannot use a subfolder, for example, with Shopify, especially if your blog is on WordPress. You cannot go to Shopify and host it there. It's not going to be hosted in the same place, because obviously, they host it on different servers, different IPs. So this is what I was mainly curious about: what kind of signals can one give to Google in order to make sure that Google understands this is part of the same entity? Like from a technical perspective, obviously, from a user perspective, it's important as well. But I assume, as mentioned, interlinking and things like that play a good role in Google understanding that these are not separate, that these are part of the same entity. Yeah, exactly. Cool. Any other questions before we get started? Brittany? Yeah. Go for it. So I submitted again on the YouTube channel, so you might have had a chance to read that last week. But yeah, we had a domain migration with over a 90% traffic loss, going on a month now, with no major culprits that we've been able to identify. So I had another theory, a potential culprit, I could run by you. So one of the things on our mind is that our site relies on a lot of JavaScript. That's always been true, but we had our old domain on pre-render until June. I won't get into exactly why we stopped using it, but we did stop using pre-render in June. And so that was through the summer; the domain migration was at the beginning of August, still off pre-render, and that's when we had all of our traffic loss.
And we were wondering if maybe taking the site off pre-render in June was not the right move, and there was maybe a bit of a lag in Google totally processing that, and the domain migration made it worse, not totally sure. But we did put our new domain back on pre-render as of basically the beginning of September, and it hasn't helped with the traffic loss at all. Wondering if you could give your opinion on whether this is a potential culprit. And if so, when we might potentially see recovery, what we could do to help it, or if we are once again pulling at a thread that is not actually going to help us. Yeah, I don't know. So from the timeline, you basically have a client-side rendered site, in this case, a JavaScript-based website. And then until June, you had it running on pre-render. So it's serving the pre-rendered version to Google. And then you kind of turn that off and let Google do the rendering, essentially. And then in August, you set up the redirect to the other domain. OK. I think that makes it more complicated, but I don't know what the actual effect there might be. So that's something where I'd probably have to take a look to see what all was kind of happening. And during that time from June until August, was it fairly stable? Or were you seeing changes there, too? Well, we definitely did see a bit of a dip, but nothing absolutely crazy. We saw a traffic loss of maybe about 15%, starting after May, which is when we started moving off pre-render. That's also, of course, right after the May algorithm update. So we had originally attributed it to that. And that's fine. And we were trying to learn from that. And sort of normal run-of-the-mill stuff. But then, of course, when we did the domain migration in August and ended up with a 90% traffic loss, that made us reflect on whether maybe that change was more tied to pre-render than the core algorithm update, and just try to focus in on that area or any area we could. OK.
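As background to the migration being discussed, a domain-level site move typically relies on site-wide server-side 301 redirects. A minimal sketch for Apache with mod_rewrite, using hypothetical hostnames:

```apache
# .htaccess on the old domain: 301-redirect every URL to the same
# path on the new domain (hostnames here are hypothetical examples).
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-example\.com$ [NC]
RewriteRule ^(.*)$ https://www.new-example.com/$1 [R=301,L]
```

The path and query string are carried over, so each old URL maps one-to-one onto its new equivalent, which is the standard setup for a site move.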
And the domain migration, how did you set that up? Because if it's a JavaScript-based website, did you just redirect all the URLs to the new one with server-side redirects? Or was there something on the JavaScript side that you did as well? Yeah, I'll speak to it as best I can. But yeah, we set up, from my perspective, standard 301 redirects. We validated via Webmaster Tools. We've had four different SEO agencies looking at everything. And the first thing they checked is our 301 redirects, and they've said everything looks golden there. Our SEO agencies picked up on things that aren't perfect, but all of them are a little bit at a loss to say how any of those smaller things, from their perspective and our perspective, could lead to a 90% traffic loss. So I feel like we did that part pretty solid, but I can't say that with full confidence. Yeah. Um, I think it makes it a little bit more complicated, but it shouldn't be such that you would see such a big drop. So in particular, what could happen, and I don't know if this is something that happened in your case, but what could happen with a client-side rendered website is, if it's based on JavaScript and needs all of the JavaScript to function properly, then when you redirect it, because of the way that we cache all of the resources that are used for rendering, it might be that it's a little bit trickier to handle a site move situation, because then we might have the new URL and we try to render that based on the previously cached old content. And depending on the way that the move is set up, it might be that rendering fails because of that kind of mixed set of information that we have. So if you have a pure static HTML page, when we move that over, we have the static HTML page with everything, but if it's based on JavaScript, we need to be able to process the JavaScript to see the full content. And we cache a lot of the content that is pulled in with JavaScript.
So it's possible there's some kind of clash, perhaps, that happened there. But that would be something where you would see issues with regards to indexing, where you would see that really the number of indexed pages is going down. It's not something where I suspect you would see kind of just the traffic drop. Yeah, I mean, our number of indexed pages is also way down. I mean, we sort of dropped off the map entirely with the old domain. And the new domain didn't really pick up more than a very small amount of keywords compared to what we used to rank for. So I guess in this case, do you have any specific suggestions for what we could do from here? So like I said, the site has been back on pre-render for about a week and a half now. And that hasn't impacted our rankings or traffic. Yeah. OK. So I passed it on to the ranking team to double check what was happening there last week. I haven't heard back anything in particular yet. But I kind of need to double check, just purely from a technical point of view, that we were able to even process all of these things. So kind of to determine, is this more of a technical indexing thing that we can work out? Or is there something from ranking that's actually harder to solve? Because technical issues are things that you can usually fix. Like, moving back to pre-render is something where, personally, I'd say you probably don't need to do that. But it reduces the number of risky kind of connections that you have there. It makes it a little bit more stable in that regard. So that's something where I'd say that's probably a good move. But I don't know if it's really purely a technical thing that needs to be worked out or if there's actually some ranking or quality issue that we need to work out. OK. No. But I was hoping I'd get more information by now from the team. But sometimes it's not that trivial. Sure. I appreciate that. And an excuse for me to join you again at midnight my time. So thank you. Yeah. OK.
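For context on the pre-render setup being discussed: pre-rendering services generally follow the "dynamic rendering" pattern, serving a pre-rendered HTML snapshot to known crawlers and the normal client-side JavaScript app to everyone else. This is a minimal sketch of that pattern, not any particular service's actual logic; the bot list and response labels are illustrative only.

```python
# Minimal sketch of the "dynamic rendering" pattern: route known crawler
# user agents to a pre-rendered HTML snapshot, everyone else to the
# client-side JavaScript app. Bot list and labels are made up.

BOT_UA_SUBSTRINGS = ("googlebot", "bingbot", "twitterbot", "facebookexternalhit")

def wants_prerendered(user_agent: str) -> bool:
    """Return True if this user agent should get the pre-rendered snapshot."""
    ua = user_agent.lower()
    return any(bot in ua for bot in BOT_UA_SUBSTRINGS)

def handle_request(user_agent: str) -> str:
    # A real server would return different response bodies here;
    # this just labels which version would be served.
    if wants_prerendered(user_agent):
        return "prerendered-snapshot"
    return "client-side-app"
```

Turning this off, as described above, means crawlers start receiving the client-side app and have to render the JavaScript themselves, which is why the timing relative to the domain migration matters.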
Have a good night then. If you want to drop out, no problem. I'll stick around. OK. Cool. All right. Let me look at some of the other questions that were submitted. We do a lot of testing on our site. And the bots are seeing all of these tests. We're planning quite a big test, which involves drastically changing the menu. This would have quite a big impact on SEO. But we need to have the test for UX purposes. We don't want to roll out this change if it's not permanent. What should we do? If we block Googlebot from seeing the test, will it be considered cloaking? If we roll out the test for everyone, users and bots, what would be the maximum period of time we could run it without impacting SEO? So first off, I think doing tests like this for usability, even for SEO sometimes, is really important. And I think it's something that everyone should consider doing and try to find ways that they can do that. Because sometimes you can only make so many assumptions about your website, and you really need to test to see what is actually the case. Do users respond to this type of menu or that menu better? Can we bring our information better to users if we present it, I don't know, in a different color or a different font or a different layout? All of these are things that you can't really know ahead of time. You almost have to test them. So doing this kind of A/B testing is really important. And understanding how you can do that is hard, like figuring those details out. We have, I think, a help page on A/B testing, which has a little bit of information there. But in general, if you're doing something that you would consider a temporary A/B test, it's fine to include Googlebot in that.
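One practical wrinkle when including crawlers in a test: Googlebot doesn't return cookies between crawls, so a cookie-based variant assignment won't stick. A cookie-free alternative is to derive the variant deterministically from a stable key, such as the URL path or a logged-in user ID, so the same key always lands in the same bucket. A sketch under that assumption (function and experiment names are hypothetical):

```python
import hashlib

# Hypothetical sketch of cookie-free A/B bucketing: the variant is derived
# deterministically from a stable key instead of being stored in a cookie,
# so repeated crawls of the same URL always see the same version.

def ab_bucket(stable_key: str, experiment: str = "menu-test", buckets: int = 2) -> str:
    """Map a stable key to variant 'A' or 'B' deterministically."""
    digest = hashlib.sha256(f"{experiment}:{stable_key}".encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % buckets == 0 else "B"
```

A site could additionally pin known crawler user agents to one stable variant, so crawling, indexing, and ranking always see a consistent version of the menu.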
It's also fine to say, well, Googlebot is a special category, which might be a category that you base on, I don't know, geographic location or on language settings or on capabilities that the device has, and then kind of do the A/B testing that way so that you have a more stable version that you show Googlebot for crawling, indexing, and ranking. So that's kind of the direction I would head there. If you do end up using a separate version for A/B testing, make sure you have the rel canonical set to your preferred version that you want to use for indexing. If you use the same URLs, then that's less of an issue. Also, keep in mind that Googlebot doesn't use cookies. So if you use cookies to set an A or B version, then Googlebot won't keep that and won't return that cookie, which means if you're not watching out, you could end up in a situation where you serve Googlebot a different version every time Googlebot crawls, which is probably a bad thing when it comes to SEO, especially when you're changing things like the internal linking, the menu within a website. So those are kind of the things that I would look at there. It's something that's definitely not trivial to do. I know there are some frameworks out there that make it a little bit easier, but it's worthwhile to kind of think through all of the different variations, keeping in mind some of the restrictions from crawlers in general with regards to how they handle cookies, how they deal with redirects and different URLs, all of those things. My old Hindi blog was getting huge traffic, but after I created some forum and dofollow comment backlinks from high-DA/PA websites to my Hindi blog, my ranking has been going down day by day. And a Google core update also decreased my ranking by 50%. In one year, three core updates hit my blog. I disavowed some of those links, but still no improvement is showing. Do I need to disavow all of those links?
So I'm kind of skeptical about the links that you place there, because it sounds like these are random links that you're placing on other people's websites, just with regards to SEO. So if you're kind of looking for high DA/PA, which is domain authority and page authority, those are metrics that Google doesn't use. They come from a third party. If you're using sites like that, then probably other people are dropping links there too, and then probably we're going to be ignoring those links completely. So my feeling is those links are not going to be the problem, but they're also not doing any good at all for your website. So that's a lot of work that you're doing, which essentially has no effect. So I would kind of recommend that you stop just dropping links on other people's sites. With regards to the ranking in general, my guess here is that it's mostly based on the content of the website itself and on the way that we see that website with regards to kind of the rest of the search world. So that's something where I wouldn't assume that those links are necessarily causing problems or that the core algorithm update is somehow magically changing things for your website. But really, it's more a matter of your website itself, the content that you provide, and kind of the overall value that you bring to the rest of the web. And the question kind of goes on. I'm hosting my blog on Blogger with a custom domain. Do I need to migrate it to WordPress? Or is Blogger still OK? Blogger is perfectly fine. So there's no advantage that you would see in a case like this from moving to any other CMS. One of the really big advantages of using Blogger is that you don't have to worry about the infrastructure. Things essentially just work for you. There's a team of engineers who work to handle any outage that comes up and who make sure that all updates are installed, all of that. So that's something that if you can save yourself that work, that's always worthwhile.
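For reference, the disavow file mentioned in the question is a plain-text list uploaded through Search Console's disavow links tool. A minimal sketch, with hypothetical domains:

```text
# Disavow file: plain text, one entry per line, "#" lines are comments.
# All domains below are hypothetical examples.

# Ignore all links from an entire domain:
domain:spammy-directory-example.com

# Ignore links from a single page:
https://forum-example.net/thread/12345
```

The domain: form covers every linking page on that host, which is usually simpler than listing individual URLs.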
But again, I would really focus much more strongly on the content. I don't know your website here, so it's hard to say. But it's something that we regularly see with kind of a blog where people are just publishing content all the time, trying to get traffic: when that drops significantly in ranking or visibility in search, then often that's due to the content not being as fantastic as we would have expected. I was wondering, when these big announced core updates happen, are some parts of the linking algorithm updated as well? Like, you discover that this site received great links from great sites; would you say that this is a better site now, because it got such recommendations, so that the big core update will reward it and show it more to users? So in general, a core algorithm update, the way that we have it defined, is somewhat vague, because it's not that we have this one piece of machinery that is kind of the core algorithm, and when we change one screw there, then that's a core algorithm update. But rather, we have so many different algorithms that work in search. And when we make significant changes across a number of them, or significant changes in the way that we interpret them, then that's something that we would call a core algorithm update. So from that point of view, it's not that we would say, oh, the way that we handle links never changes, or the way that we handle links always changes when we make a core algorithm update. These changes can happen essentially at any time. And they can also coincide with core algorithm updates. The way that we process links is something that is continuous. So it's not that we have to wait for a specific time frame to see the new effect of the links. But rather, when we see links on the web, we can take them into account essentially immediately. In the HTML spec, P is a logical element. Is that the same for Googlebot?
I mean, a situation where one paragraph contains one theme is much better for a robot's understanding than a stream of thoughts in one big P element. So this is an interesting question, because I never really thought about this. But from my understanding, we don't do anything special with these kinds of paragraph delineators; a div, for example, is something that people sometimes use for this as well. But rather, we kind of see it as a chunk of text. So it's not that we would say a paragraph separation means something unique. But rather, when we render the page, when we pull out the HTML, when we look at how things are semantically structured, when we see that this is one collection of thoughts, essentially a number of sentences that are all together, then that's something that we can kind of assume belongs together with regards to understanding that page. So I wouldn't say that we go out of our way to semantically understand that this is a paragraph, but we do try to figure out what a paragraph is. So I guess in a way, you could say we do figure it out. I don't know. It's weird. But I don't think it's the case that we try to turn a P tag into something more than kind of a separation of paragraphs. One place where this sometimes goes a little bit weird, though I haven't seen it much recently: previously, when people were using things like table-based layouts, the table cells themselves were also kind of paragraphs. And sometimes what we saw is that people were using a table-based layout to actually structure something that belongs together. So you could imagine a situation where you have a table set up, and a sentence is spread across a number of different table cells, which you're using for whatever design purposes. And for us, when we look at that, we see a collection of individual cells. We don't realize that actually these cells belong together and make up one big text piece.
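A hypothetical illustration of the table-layout issue described above, contrasting a sentence split across layout cells with the same content in a single paragraph:

```html
<!-- Hard for a parser to reassemble: one sentence spread across
     table cells that are used purely for visual layout. -->
<table>
  <tr>
    <td>Our widgets are</td>
    <td>tested for</td>
    <td>durability.</td>
  </tr>
</table>

<!-- Unambiguous: the same content as a single paragraph. -->
<p>Our widgets are tested for durability.</p>
```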
Or similarly, you could imagine a whole paragraph being split up into individual table cells. And that's something where, at least when people were doing this more often, this kind of table-based layout, it's something that we did run into every now and then, that we couldn't work out that these things are actually next to each other and belong together for understanding of a page. But with all of the modern CSS solutions out there, I think that's a lot less of an issue. It seems Google recently launched an update for the News tab in Search, something that many were complaining about for the past nine months. Likewise, I also noticed that Search Console is now showing performance of links for a news section. Has that always been there, or is that also a new thing? I'm unable to see any performance insights there. So I don't know what all is happening on the news side. I do know that they're working on some things to catch up with whatever they've been struggling with there. So hopefully, for those of you who are waiting for this, hopefully that will be a little bit better in the future. But I don't really have any details there to share. My understanding is that they'll probably also post something in the News Help forum at some point with more details. With regards to Search Console, we did start including the news information in Search Console, but that's, I believe, the News mode in Search, not Google News. So it's slightly different from just pure Google News. And the data that we pulled in there, I think, only goes back to July or something like that, from when we started collecting this data for sites. So sites that have been around for longer, that have the usual 18 months of data in Search Console, still have a limited set there for that news section. Over time, as we collect more data for these sites, then we'll have that full time period again.
I wanted to ask if linking from the main content part of the old but still relevant articles on my domain to the new recently published articles on the same domain increases relevance for specific internal anchor texts and topics used in the link, or if it otherwise helps that new article? So the question is kind of complicated, but I think essentially what you mean is does it make sense to link from content on one page within your website to new pages on your website or to other pages on your website? And from my point of view, that's an obvious yes. Of course, linking between different parts of your website, creating a good internal linking structure, that always makes sense. So that's something that I would always recommend. It's good for users. It's really helpful for search engines because if we have multiple paths to crawl your website, it's a lot easier for us to crawl your website. If we can better understand which parts of your website are important to you, which is usually the parts that you link to more often within your website, then that helps us to better kind of focus our efforts on those parts as well. So any kind of improvement that you can do for internal linking is a good thing. Two questions. Let's see, when is the next Google Core update coming? The last one for me was hard. I don't have any dates for core algorithm updates. I know everyone kind of wants one, but it's something that from our side, we try to minimize the amount of big disruptions like these as much as possible. And when we do work on bigger changes with regards to understanding relevance of search, it is something that we try to significantly test to make sure that it's working well. We run a number of evaluations on our side to kind of determine where the different search results will be, if that matches what we want. 
We do a number of tests with external raters to figure out if we're missing anything, or kind of accidentally not recognizing that something negatively affects things that we would be worried about. And from our point of view, it's not that we would wait for a specific date and then just launch whatever we have then. But rather, we want to make sure that this is working as well as possible, and then when it's working well, then we will launch it. So it's not like a release date that we have to hit, but rather, we will launch it when things are ready. My Search Console is showing nothing, but I suspect my site may have a ranking penalty for some reason. So is the only way to check the manual actions section in Search Console, or is there another way to check this? Yes, if there's a manual action on your website, so if the Web Spam team has manually kind of needed to step in and take some action with regards to your website, then that will be visible in Search Console. I think there is an extremely small number of cases where that wouldn't be visible, but those are really kind of tricky edge cases. And it's not something that an average site would run into. So in general, if you're not seeing anything in Search Console in the manual actions section, then I would assume that your website is ranking algorithmically the way that we think it makes sense. And it's not the case that there is anything manual kind of playing a role in that. Finally, I want to point out that a cache plugin named WP Rocket released yesterday a JavaScript delay function that solves all of the problems that AdSense generates in PageSpeed Insights. So that sounds pretty promising. I haven't taken a look. I also don't have AdSense on my blog, so can't really test it. But I know lots of people, or at least some people, have been complaining about AdSense and some of the other services and products that we have that you can embed on your site that make a page slower.
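I haven't looked at WP Rocket's implementation, but the general "delay JavaScript" technique usually looks something like the following: a third-party script is only injected on the first user interaction, so it doesn't compete with the initial page load that speed tools measure. A sketch with a placeholder URL:

```html
<script>
  // Hypothetical sketch of the "delay third-party JavaScript" technique:
  // the script is only injected on the first user interaction, so it does
  // not compete with the initial page load that speed tools measure.
  var thirdPartyLoaded = false;
  function loadThirdPartyScript() {
    if (thirdPartyLoaded) return;
    thirdPartyLoaded = true;
    var s = document.createElement('script');
    s.src = 'https://example.com/ads.js'; // placeholder URL, not a real endpoint
    s.async = true;
    document.body.appendChild(s);
  }
  ['click', 'keydown', 'touchstart', 'scroll'].forEach(function (type) {
    window.addEventListener(type, loadThirdPartyScript, { once: true, passive: true });
  });
</script>
```

The guard flag matters because several of the listed events can each fire once; without it, the script could be injected multiple times.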
And if there are ways to improve the speed for that, then I think that's a good idea. So might be something to try out. I found a snag in the Google workflow to recover account access. So I took a quick look at this question ahead of time. And I don't know exactly what it's referring to. I suspect it's tied into the Google ads side of things and not a Google Search Console kind of scenario. So I really don't have any insight on what can be done there. I know that, in general, the account recovery team works really hard to make sure that things work as much as possible. But if you're finding that the normal account recovery flow isn't working, then I would definitely post in the help forum. I don't know if there is now a separate Google account help forum. Otherwise, probably the Gmail forum is a good proxy for that. I want to ask, what is the best text to write in the alt text of an image that has a quote written on it? Do the quotes work best, or the keywords from the article? How should a person handle such images? So the alt attribute for images should be a description, essentially, of the image itself. And from that point of view, if your image is a quote, then you can describe that quote, or you can just copy that quote into the alt attribute. That's a perfect setup. So that's kind of how I would go there. I wouldn't use the alt attribute just to stuff keywords from the site in there, because that's not useful for users. It's not useful for us as a search engine either, because we already see those keywords from the rest of your page. For search, what happens with the alt attribute is we use that to better understand the images themselves, in particular for image search. So if you don't care about image search, then from a search point of view, you don't really need to worry too much about the alt attributes.
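A hypothetical example of a descriptive alt attribute for a quote image (the filename and quote are made up):

```html
<!-- The alt text simply describes, or repeats, what is written
     on the image, rather than stuffing unrelated keywords. -->
<img src="quote-card.png"
     alt="Quote card reading: 'The best time to start was yesterday. The second best time is now.'">
```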
But if you do want these images to be shown in image search, and sometimes it makes sense to show fancy quotes in image search as well, then using the alt attribute is a good way to tell us this is what's on that image, and we'll get extra information from around your page with regards to how we can rank that landing page. I'm seeing a constant influx of spam and malicious domains appearing in the search results for a website I manage. We're finding about 30 to 50 new ones per day ranking in the top 10 pages for our brand search. Most of them redirect to phishing scams and virus warnings. What's the best way of dealing with these and, most importantly, preventing them? It's quite strange that these domains are ranking in the top 10 pages on Google for a brand when we have hundreds of popular articles that Google should be showing ahead of these. Is this just a loophole that affects all brand searches? So I don't think this is something that affects all brand searches. That would be something where we'd probably hear a lot more complaints. But it is something where, depending on the kind of query that your brand search essentially is, it can happen that we don't have a ton of really good content. And even if we have more good content from one website, it might be that we just show a handful of results from that one website, and then for the rest we take whatever else we can find on the web, and if we don't find a lot of good things, then we have to scrape the bottom of the barrel and we come up with things that aren't that great. Especially if you're talking about the first 10 pages of the search results for something like a brand query, probably users, when they're searching for your brand, find your brand on the first page and kind of go to your site directly. They wouldn't go through the top 10 pages to see what else is there.
However, if this is something where it looks like these kind of hacked, phishing, virus-y sites follow a particular pattern, and it feels like Google is missing this pattern completely for whatever reason, then I would strongly recommend posting more about this in the Webmaster Help Forum so that the folks there can also pass that on to us if they recognize that there is something more general and broader that Google would need to do. So that's kind of the direction I would go there. Of course, with virus content, you can also report that through a special form on Google's side, which we use quite a lot. Also for phishing content, you can do the same thing. So that can help there, but essentially, if there is nothing good that we can show for some of these queries and you're going to page 10 in the search results, then we end up with all kinds of weird things that happen to be indexed for some reason. We have a news site and we use advertising. When I look at the HTML code inside Search Console, URL Inspection, I see there is a lot of CSS code injected by the ads JavaScript in the DOM. Does this affect the page ranking in any way? Usually not. So various JavaScript snippets do inject CSS. They do inject some HTML into the page, depending on what they're meant to do. That's something that from our point of view doesn't affect ranking. There are two things to watch out for when it comes to third-party JavaScript that you use on your website. One is that some kinds of JavaScript inject HTML into the head of the page, which can result in us not being able to understand which elements of your content are actually in the head. So one example is that a JavaScript might inject something like an iframe at the top of the head of your page. And when we parse that HTML page, we'll see the iframe tag. And we'll say, oh, the head of this page is closed. We can skip all of the rest of the meta tags and things like that that are there.
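A hypothetical illustration of that head-injection problem (URLs are made up): when the parser hits an element that isn't valid inside the head, such as an injected iframe, it implicitly closes the head, and everything after that point is treated as body content:

```html
<head>
  <script src="https://example.com/widget.js"></script>
  <!-- Suppose widget.js injects an <iframe> right here at runtime.
       An iframe is not valid inside <head>, so the parser treats the
       head as closed at that point, and the tags below may no longer
       count as head metadata: -->
  <meta name="robots" content="noindex">
  <link rel="canonical" href="https://example.com/page">
  <link rel="alternate" hreflang="de" href="https://example.com/de/page">
</head>
```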
And then it can happen that a robots meta tag is missed, that a rel canonical is missed, that hreflang links are missed, because we think that they're no longer in the head of the page. So that's one thing to watch out for. In URL Inspection, you should be able to see that. It is really tricky to spot. And the first time you see it, you'll be really surprised, and it's a big aha moment. But it is something that can happen. And to fix it, you essentially have to work with the provider of that third-party JavaScript to let them know that injecting things into the head of the page can be problematic. The other element that can play a role with third-party JavaScript and content injected into a page is essentially everything around speed, where if a JavaScript element injects a lot of content into a page and that makes that page really, really slow, then when we evaluate the speed of the page for ranking, that's something where we might see that, oh, this page is very, very slow. And we would not differentiate between whether it is you, the site owner, that is making this page slow, or some third party that you're embedding with JavaScript that is making the page very slow. But rather, when we load your page, it is very, very slow. So that's the other aspect to kind of keep in mind. Does Googlebot ignore the container tag while crawling? So I don't know what the container tag is. And I didn't have a chance to double-check before the hangout. I feel I'm missing something totally obvious. But in general, let's say the container tag is some new HTML element that somehow I missed, for example, something in HTML5 or something even newer than that. Then, from Google's point of view, we would essentially process that as any other tag that we don't know about. So that's something where we would try to render the page, if we can render the page, and get the content. We render the page using a fairly modern Chrome version. 
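The early-head-close problem described above (an injected iframe implicitly ending the head, so later meta tags are treated as body content) can be illustrated with a small, hypothetical sketch. This is not Google's actual parser, just a simplified simulation of the HTML rule that body-only elements end the head; the function name and examples are made up for illustration:

```python
# Simplified, hypothetical model of how an HTML parser can implicitly
# close <head> when it encounters body-only content (like an injected
# <iframe>), causing later meta tags to fall outside the head.
import re

# Elements that may legitimately appear inside <head>.
HEAD_ONLY_TAGS = {"title", "meta", "link", "style", "script", "base", "noscript"}

def tags_seen_in_head(html: str) -> list:
    """Return the tag names this simplified parser still treats as head content."""
    in_head = False
    seen = []
    for tag in re.findall(r"<\s*(/?[a-zA-Z][a-zA-Z0-9]*)", html):
        name = tag.lstrip("/").lower()
        if name == "head" and not tag.startswith("/"):
            in_head = True
            continue
        if not in_head:
            continue
        # An explicit </head>, or any element that doesn't belong in the
        # head, implicitly ends the head section.
        if tag.lower() == "/head" or name not in HEAD_ONLY_TAGS:
            break
        if not tag.startswith("/"):
            seen.append(name)
    return seen

good = "<html><head><meta name='robots'><link rel='canonical'></head><body></body></html>"
bad = "<html><head><iframe src='x'></iframe><meta name='robots'><link rel='canonical'></head></html>"

print(tags_seen_in_head(good))  # the robots meta and canonical link are in the head
print(tags_seen_in_head(bad))   # the injected iframe ends the head before either is seen
```

In the second example, the robots meta tag and rel canonical are still physically inside the head markup, but a parser following this rule never sees them as head content, which is exactly the failure mode John describes.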
So if we can render the page and get the content, then that's essentially fine. If the container tag is something that just provides extra functionality and doesn't change the content that is shown for rendering, then we would essentially ignore that. Perhaps they're referring to the Google Tag Manager container? Oh, OK. The Google Tag Manager. Yeah. Good point. Yeah, that's also a container. It's always awkward when you don't know what something is, and it's like, am I missing something totally obvious, or did they confuse something? Yeah, so if it's Google Tag Manager, then that's something that could be tied in when it comes to rendering. You can test that with the URL Inspection tool to see what happens there. And I haven't played around with Google Tag Manager for a while, but you can do things like include JavaScript, which means you could update some of the content on the page as well. So if you're using that to update the content, to add content, to remove content, then when we render the page, we'll be able to take that into account. Do you know about any updates coming to Google My Business regarding keywords as a business plan? I don't know anything about Google My Business, essentially, so I don't really know there. I have seen anecdotally on Twitter various people both complaining and asking for features around that, but I really don't have any insight on Google My Business. If you see a URL as a 200 which was previously a 404, then Google will treat this as a fresh new URL and not impact performance in the big picture. My question is, if those URLs were performing well previously, does that mean they might have had some kind of a score, a page authority, which would influence Google ranking? If Google sees those URLs as fresh URLs, that means all score and authority will be zero. Wouldn't this indirectly impact performance? So at least as far as I know, we don't have any kind of score that we would maintain after we drop a URL from the index. 
So it's possible that there are some elements that remain kind of in our index, I don't know, metadata for a URL, when a page just temporarily drops into a 404 state and doesn't really get removed from our index. But as soon as we remove that page from our index, we essentially don't have that metadata anymore. So if something is no longer a 404, if it's no longer a noindex, then we don't have that information anymore. However, there might be a lot of signals we pick up from the rest of the web. So if that page comes back, we wouldn't know about that page anymore, but we might have lots of signals from the rest of the web that tell us this is actually a good page. So in particular, if you have a lot of internal links pointing to that page, if the rest of your site is something that we would consider very important or very high quality, then that gives us a lot more information about that new page, or that page that has come back. So it's not so much that we would keep those signals and return them when that page comes back after some time, but rather that the information where those signals come from still exists somewhere, perhaps. And when that page comes back, then we can rebuild those signals based on the information that we have otherwise. So that's probably what you'd be seeing there if you kind of test around with this kind of functionality. OK, we're kind of running low on time. It looks like we still have a bunch of questions left, but I thought I'd just open things up for you all as well. If there's anything else on your mind that you'd like to talk about, feel free to jump in. I'll also be around a little bit longer if there's anything you want to chat about after the recording as well. This is Shiva again. Last week we spoke about the news. Just wanted to update you that we have seen an update last night in Search Console. 
And previously it was listing only our site operator keyword, but now it is listing all of our articles in Search Console. And we are also seeing a static clicks number as well in Search Console. But it is not increasing, though, for some reason. We don't know why the clicks don't increase if the impressions are there. And one thing we have noticed is that a couple of our articles did surface on the News tab in the regular search. But when we go into the Google News app or news.google.com, we don't find the same articles there. So if you could please throw some light on the difference between these two. And probably if there is something coming in the future, maybe in a couple of weeks, as an improvement to this, because we have come a bit far from where we have been last week. So that is good. Thank you so much for that. Yeah, I don't work directly with the news team, so I don't have a lot of direct insights there. But I'm really glad to hear that things are looking up a little bit. If you're seeing these changes just from yesterday, then I would give it a little bit more time to settle down. Because in general, indexing changes, and I assume it's the same when it comes to news, do take a little bit of time to settle down and to rebuild the signals that we need for that. So that's something where I'd say it's cool to see the initial jump, but you need to give it a couple of days to see where it settles down and where you start seeing data being collected. Also, the data in Search Console, depending on the element that you're looking at, has a little bit of a delay. So it might be that after a couple of days, that data actually starts showing up there. But let me know if you still end up seeing issues like this after a couple of days, maybe next week, and then we can see if there's something unique with the news side that we need to push. Sure, John. 
I mean, is there a difference between the News tab in Search Console and the News app, or are they both the same? There is a difference. Yeah, so the News in Search Console is the News tab in Search, or the News mode in Search. And it's not Google News. So it's very confusing, because they're all called News. But we treat them as something different. OK, OK. Thank you, John. We'll come back to you next week. Thank you so much, though. Cool. Hey, John. Hi, got a question? Yeah, I got a question regarding guest postings. I know you've covered that previously. But my question is, for very high quality guest posts, like for SEO purposes, and driving lots of traffic, having original, helpful content, and even ranking high on Google, should the links be nofollowed as well? And are those links being devalued? And the next one would be, what's your take on websites with a large amount of articles from industry experts or professional writers that contribute great content? Thank you. I think it's hard to have general guidance there. But from our point of view, if these guest posts are really purely there for those links, then that would be something that the web spam team would be worried about. There are lots of ways to do guest posts where you're essentially just driving publicity to your content or driving awareness of your content. And that's something where, from my point of view, I don't see that as being problematic. Kind of the one way I look at it is, if you, as the person writing this guest post, don't really care if that link is follow or nofollow, then that's a good sign that you're writing this with the intent of bringing awareness, reaching a broader audience, reaching other people. Whereas if you're saying that the only reason I would ever publish this guest post is to get a link that does not have a nofollow attached to it, then that's kind of a sign that you're really just trading that post for a link. 
And that would be a problem. And kind of focusing it on great content and really great people, smart people, writing these posts, that's something that I sometimes see as more of a distraction rather than the actual issue at hand there. But I do know there are some sites that have guest authors that are posting, and they write fantastic content, but it's essentially someone who's writing on that website where there is an editor involved, and essentially they're working to create normal content for that website. It's not some random guest post that they get submitted and just publish one-to-one. So I don't think that really answers your question, but that's kind of the way that we would look at it. Also, the other thing that is sometimes worth keeping in mind is, when the web spam team takes a look at your website, kind of think about what they would see. Sometimes the web spam team will take a look at a website and say, well, these links are all over the place. There's all kinds of linking happening here. If there's something bad in between, it happens; that's not the end of the world. But if they look at a website and they say, well, all of the important links to this website are guest posts, and they're high quality guest posts on high quality sites, but all of the links to the site are guest posts, then that's kind of a sign that maybe we need to be a bit more careful here with regards to the links to the site. So I assume it's more of a spectrum instead of black or white. Yeah, yeah. OK, thank you. John, in the infinite scroll documentation, that old blog post, one of the best practices mentioned there is that each separate pagination page, when accessed directly, should only have the products for that specific page. So products from 20 to 40, or something like that. And you shouldn't really mix products together. 
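The best practice just mentioned, each paginated URL serving only its own slice of products rather than also loading everything from earlier pages, can be sketched roughly like this. The function and variable names here are hypothetical, purely for illustration:

```python
# Hypothetical sketch contrasting two pagination behaviors: each page
# serving only its own products (the documented best practice) versus
# each page also including everything from earlier pages.

PAGE_SIZE = 20

def products_on_page(all_products, page, cumulative=False):
    """Return the products a given paginated URL would serve.

    cumulative=False follows the best practice: page N serves only
    its own slice. cumulative=True mimics a setup where page N also
    includes the products from pages 1 through N-1.
    """
    start = 0 if cumulative else (page - 1) * PAGE_SIZE
    return all_products[start : page * PAGE_SIZE]

catalog = ["product-%d" % i for i in range(1, 61)]  # 60 products, 3 pages

# Best practice: page 2 serves only products 21-40.
assert products_on_page(catalog, 2) == catalog[20:40]

# Cumulative variant: page 2 serves products 1-40.
assert products_on_page(catalog, 2, cumulative=True) == catalog[:40]
```

With the cumulative variant, the last page of a category ends up duplicating the entire category's product list on one URL, which is the situation the question below is asking about.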
So the reason why I'm asking this is, I have a case where if you go to page 2 of a certain category, you also have the products loaded from page 1. So you have page 1 plus page 2. If you go on page 3, you have page 1 and page 2 and page 3. And it kind of scrolls you down to page 3. So how much of a negative impact does that have, having those kind of previous pages also loaded on those pagination URLs? It really depends on what you're using the pagination for. So in a case like this, it looks like the primary objective is to find links to the product pages. And if we find those links on multiple pages, that's fine. It doesn't matter. In cases where you have one really long article, or you have multiple articles that you're loading with infinite scroll, then it's more a matter of the content itself being indexed, not the links. And if, by indexing the content, we have this mix of different articles on the same URL, then that's a problem. So that's kind of the differentiator. Is the reason for this infinite scrolling page finding links to other pages? In which case, duplicates are fine. Or is the infinite scroll a matter of finding indexable content on that page itself? In that case, differentiating between the different pages is really important. But is there a risk that, for example, you have a category with 10 pages, and that last 10th page kind of has all of the products for that category because it loads all of the previous pages? Would Google kind of think, well, this seems to have everything that I need, so I'll index that one and show that one in the search results? Is there a danger that might happen? I don't think so. I don't think that would be a problem. So I guess the main place where we saw this as actually causing problems is more on news websites. Content-based websites. 
Yeah, content-based websites, and really in particular news websites, where, if you load multiple articles using infinite scroll and they end up being on the same page, you can end up in a situation where you have two completely disconnected articles loading on the first page. And then the title is something like, this politician said x, and then at the bottom there's an article about a car crash, and then suddenly, for queries like politician plus car crash, it's like, oh, this politician had a car crash, but it's essentially completely disconnected articles on the same page. And that's something where we have ended up contacting sites and letting them know, hey, this is really a problem. And it almost causes news cycles of its own, because you go to Google and search politician name plus car crash, and with all these articles showing up, it's like there's a conspiracy, something is trying to be hidden. But from an e-commerce point of view, given that the documentation features that specific example for e-commerce, should websites try to not have that kind of issue, try to solve it, or does it not really matter? I think it is always a good practice, because then you don't accidentally run into this, but it's definitely not critical for e-commerce. Especially since, for e-commerce, we really need to find those links to the product pages and the category pages, and the first paginated version of the category page will almost always be the most important one, because that's the one that's linked from the rest of your site. So that's something where I don't see this as being a problem for e-commerce. I think it's a good practice just to make sure you avoid that, but it's not going to change things for e-commerce. Okay, cool. Cool, okay. Let's take a break here with the recording. I'll pause here as always; if any of you want to hang around a little bit longer, perfectly fine. Thank you all for joining. 
I hope you found this recording useful and hopefully I'll see some of you all again in one of the future hangouts. Bye, everyone. Bye, John.