Hi, everyone, and welcome to today's Google SEO Office Hours Hangouts. My name is John Mueller. I'm a Search Advocate at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where people can join in and ask their questions around their websites and web search. A bunch of stuff was submitted already on YouTube. We can go through some of that. But if any of you want to get started with the first question, you're welcome to jump in now.

Hey, John. Hi. Hey, so we have a competitor who, on his website, has a great number of product reviews. And after a mutual agreement, we would like to use some of his product reviews on our website alongside our own reviews, for the benefit of our visitors who would be interested in them. And they would help them to decide on the product. So in exchange, we would naturally give credit for the reviews by providing a do-follow backlink from each of our product detail pages which will use product reviews from his website. And in the end, we'd end up with around 2,000 backlinks linking to his site from our site, which has around 12,000 pages. Both our and his websites are well established and belong to the top 10 in the niche. Would Google see this as something that's OK, or could this harm our or his rankings? In order to avoid duplicate content and Google seeing the reviews on our website, because we don't really want Google to see them since they're there only for our users, we are thinking about lazy loading them. Would this be OK?

So I guess starting with the last part, lazy loading is fine. It's fine also to have parts of your pages where you say these should be excluded from the snippet, so using the data-nosnippet attribute, for example. If you're using reviews that are not sourced yourself, then you shouldn't use them in structured data. That's kind of the other thing. I think the trickier part is really with regards to the link there. It sounds like you're exchanging things of value there. And from my point of view, that kind of falls into the bucket of link exchange. And that could be something where the Web Spam team would say this would be problematic. My general recommendation there would be, if this is on a larger scale, then I would definitely work to make these links nofollow, so that users can click through if they want. There's some value in that, at least, but that way you're sure that it doesn't come across as "we're buying the reviews from them in exchange for backlinks."

OK, so basically, if we are going to give him credit, it should be nofollow credit. And maybe one link on a general web page saying that we are using reviews from his website, and that could be a nofollow link. Yeah, I think that would be fine. Yeah. OK, yeah, great. Thank you very much. Sure.
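To illustrate the markup approach discussed in that exchange, here's a rough sketch of wrapping a syndicated review in data-nosnippet and making the credit link nofollow. The HTML, class names, and URLs are made up, and it assumes the beautifulsoup4 package:

```python
# Rough sketch: exclude a syndicated review block from snippets and credit the
# source with a nofollow link. Hypothetical markup; requires beautifulsoup4.
from bs4 import BeautifulSoup

syndicated_review = """
<div class="partner-review">
  <p>"Great pool heater, easy to install." - reviewer on the partner site</p>
  <a href="https://reviews.partner-example.com/product-123">Review source</a>
</div>
"""

soup = BeautifulSoup(syndicated_review, "html.parser")
block = soup.find("div", class_="partner-review")

# Exclude the whole syndicated block from search snippets (boolean attribute).
block["data-nosnippet"] = ""

# Credit the source without passing link signals.
for link in block.find_all("a"):
    link["rel"] = "nofollow"

print(block.prettify())
```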
Hi, John. Hi. So I have actually a few questions; most of them are follow-ups to the main question. So my first question is the main question. If we move a domain to another domain entirely, does it mean that the new website will get all of the rankings the old website had? It should transfer all of the signals, yeah. If everything is otherwise fine, then all of that should transfer normally. And we've worked for a really long time to make these domain moves as smooth as possible. And I think for the most part, they work really well. Sometimes we do run into weird edge cases, but it should essentially transfer everything. It's trickier if you have one domain that already has existing content and you move something there, because then you're kind of merging two websites. But if you're really just moving from one domain to another, one to one, and it's just changing the domain name, essentially, then that's something we should be able to pick up very easily.

So here's my next question. We have a client who builds swimming pools and also builds spas. What they did before, they had one site for building swimming pools and another site for spas. Now they want to merge the spa site into the swimming pool site. So all the things they had on the spa site, they want to redirect to the swimming pool site, because they're creating a category for the spa and all the products will be over there. If we do this, the spa site has some organic rankings. Do those rankings move to the new product category we're creating on the swimming pool site?

Yeah. So that sounds like you're merging sites, right? Yeah. Yeah. In general, we do try to figure out what the right approach is with merging sites. But it's harder than when you're moving from one domain to another, because you don't really know what the final outcome should be. So essentially, what happens is, on a page-by-page basis, we try to move things over. And then, as a whole, across the whole website, we obviously have to kind of recalculate and rethink things: how the internal linking is, which of these pages are relevant, and how they're connected with the rest of the web as well. So that's something where I would expect some of these pages, on a page-by-page basis, if you're just moving them, to essentially just transfer over. And for a lot of the rest, it will probably take a bit of time to settle down. And I imagine you will see a positive effect of merging these two sites together. But it's something where it's hard to say what the final outcome will be, because it's not all of the traffic from site A plus all of the traffic from site B, but some kind of mixture in the end.

And the last question, this is kind of the opposite of merging. So at one point, they built a section with playground products for kids. Now they want to create a separate domain for the playground section. And all the playground products they have on the website, they want to move those products to the new domain. The question is, the rankings the old website has for the playground-related keywords, will the new website and new domain get those rankings? Maybe. I think splitting sites up is almost harder than merging them. But it's a similar situation, where you can't do it purely on a page-by-page basis, because things like internal linking will be very different if you take things out of one website. I think sometimes it makes sense to separate things out. For the most part, I generally recommend concentrating things more rather than separating them out. But sometimes there are really good reasons to split things out across separate domains. Thank you.

John, can I ask a question regarding site moves? Sure. Could it be the case that if you're moving to a domain that was previously associated with selling illegal drugs or pornography, it could affect the transfer of the PageRank and signals over? Or does it have some sort of algorithmic demotion that's affecting the new site? That can happen.
So for example, if the old domain, or the domain that you're moving to, was an adult content domain, then it can happen that our SafeSearch algorithms kind of stick to that classification and say, well, this is adult content. And it can take quite a bit of time for that to kind of settle down and get dropped. With regards to, I don't know, other kinds of spam, it really depends a bit. I mean, it's not necessarily spam if it's just adult content. But with regards to other kinds of problematic or tricky content, it is something where, for a large part, we try to replace it with the new version. But sometimes there are effects that are also outside of the website as well, such as maybe there are lots of external links pointing to the site with problematic anchors. That might be something where, on the one hand, we might be ignoring these already, or it could even be that they're causing problems for the website. So if you're moving to a site that has a longer history, then that is something to kind of consider, that it might be harder to get it into a neutral state again. Yeah.

I mean, I've seen one case where we moved to a spammy domain. Initially, traffic dropped by, I don't know, 75% overnight. But after a broad core update, around six months later, we actually recovered most of the traffic. I don't know. That's very weird, I think. I think that can happen. In general, with our algorithms, we try to make it so that they adapt over time to things like that. And if we move things over and we have kind of these residual signals for the new domain that you're moving to, then that should settle down over time. Sometimes that takes a couple of months. Sometimes that takes a year or so. And depending on what the domain was doing beforehand, it's easier or it's harder.

John, that's very related to my question. So when we migrated our domain, before we even decided to do the migration, we checked for any links to the new domain. We checked for content history. There were only two links, and no content up for 10 years, at least according to the Wayback Machine. What other things should we have looked for? What's within our control to check for a new domain so that we don't run this risk? Because that seems to be the indication we're getting, that that's what's going on with us. So we don't want to run into this in the future. And also, we still don't really know what to do right now. Yeah.

I don't know. So I saw your question this morning. One of the things on our side is we're still looking into some of the other signals that were kind of missing there. And at the moment, I'm mostly trying to push the team into just getting it resolved as quickly as possible, where I think that first step was good, but it does turn out that there are other things that are kind of stuck there as well. But I don't know what exactly else you should be watching out for. So I think the obvious ones are really the things that you looked at, like the links and the previous content. Sometimes it's a bit tricky in that really spammy domains try to hide their history in the Wayback Machine. But I don't think that's the case with the site that you moved to. So I don't really know what you could have done differently there. It's a bit frustrating, and I wish we could get this resolved a little bit faster. OK, thank you. Sure.
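As a quick sketch of one of the pre-migration checks mentioned there, here's how you might look up whether a candidate domain has archived history via the Wayback Machine's public availability API (this only returns the closest snapshot, so it's a quick sanity check rather than a full history review; the domain is a placeholder):

```python
# Quick check of a candidate domain's archived history using the public
# Wayback Machine availability API. Placeholder domain; standard library only.
import json
import urllib.request

def wayback_snapshot(domain: str) -> dict:
    url = f"https://archive.org/wayback/available?url={domain}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

data = wayback_snapshot("example.com")
snapshot = data.get("archived_snapshots", {}).get("closest")
if snapshot:
    print("Closest archived copy:", snapshot["timestamp"], snapshot["url"])
else:
    print("No archived history found for this domain.")
```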
Hi, John. I wanted to ask about the carousel results in Europe and the eligibility criteria for that. Could you explain how to get featured there? Because our domains are market leaders in some of the countries in Europe, and we feel that we should be eligible for that. But there is no documentation whatsoever. Which carousel did you mean? The related results carousel. For example, if you're looking for a mechanic, it would feature sites like Gumtree or OLX. OK, I don't know. If you want, let me just drop my email address here. If you want to maybe send me an email with what exactly you're seeing and what your site is, I can try to find someone and pass that on. All right, that's awesome. Thank you. In general, a lot of these kind of, I don't know, extra links, I don't even know what they're called, where we link to other sites for more information, are something that happens algorithmically. But I don't know how things are handled in that particular case on our side. OK, I'll send you an email. Thank you. Sure.

OK, let me go through some of the submitted questions. We have the written questions; the one on top I think we kind of looked into already. Let's take a look at the next one. We have a large site, and in a section on the site, we have a forum. This forum is on an old CMS, and it's difficult to optimize for speed. We're looking to make sure that we have a good Core Web Vitals score before 2021. If we're unable to improve the speed of this forum, will this only affect the keywords that rank on the forum pages? Or could it affect the rankings of the pages that aren't on the forum and that are faster? Basically, is speed looked at on a page-by-page basis? Or could slow speed on some pages of your site affect how Google sees your site as a whole?

Good question. So in general, with our algorithms, we try to be as fine-grained as possible. If we can get granular information for your site and recognize the individual parts of your website properly, then we will try to do that. However, it depends a little bit on your site and how much data we have for your site, especially when it comes to speed, where it's based, or it will be based since it's not live yet, on the Core Web Vitals data from the Chrome User Experience Report, which is aggregated from just a very small sample of the people that visit your site. And that's something that doesn't have data for every URL of a website. So depending on how much data is available there and how easy it is for us to figure out which parts of your site are separate, that's something that we can do more easily or that is a little bit harder.

We have similar things, I guess similar mechanisms, across various other signals that we use in Search. One of them, for example, is with adult content, where if you have a part of your website with adult content and a part of your website that has normal content or other content on it, then the easier we can recognize that these are separate parts and separate them out individually, the more likely it is that we can just treat that one part slightly differently. And you can do that with things like making sure you have a clean subdirectory structure on your site, or using subdomains, if that makes sense for your website. In your case, the easier it is, for example, to split out the forum from the rest of your site, where we can tell, oh, slash forum is everything forum and it's kind of slow, and everything else that's not in slash forum is really fast, if we can recognize that fairly easily, that's a lot easier. Then we can really say, everything here in slash forum is kind of slow, and everything here is kind of OK. On the other hand, if we have to do this on a per-URL basis, where the URL structure is such that we can't tell based on the URL whether this is part of the forum or part of the rest of your site, then we can't really group that into parts of your website. And then we'll kind of be forced to take an aggregate score across your whole site and apply that appropriately.

I suspect we'll have a little bit more information on this as we get closer to announcing, or kind of closer to the date when we start using Core Web Vitals in Search. But it is something you can look at already a little bit in Search Console. There's a Core Web Vitals report there. And if you drill down to individual issues, you'll also see that this URL affects so many similar URLs. And based on that, you can already kind of tell, oh, is Google able to figure out that my forum is grouped together? Or is it not able to figure out that these belong together?
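As a toy illustration of why that kind of clean URL structure helps (this is not Google's actual logic, and the URLs and LCP values are invented), here's a sketch of grouping per-URL field data by path prefix so a slow section like /forum can be looked at separately:

```python
# Toy sketch: group per-URL speed samples by top-level path so a slow section
# (e.g. /forum) is clearly separable from the rest. Invented data, not CrUX.
from collections import defaultdict
from statistics import mean
from urllib.parse import urlparse

lcp_samples = {  # URL -> Largest Contentful Paint in seconds (made-up values)
    "https://example.com/forum/thread-1": 4.8,
    "https://example.com/forum/thread-2": 5.1,
    "https://example.com/products/pool-heater": 1.9,
    "https://example.com/products/spa-cover": 2.1,
    "https://example.com/": 1.7,
}

groups = defaultdict(list)
for url, lcp in lcp_samples.items():
    path = urlparse(url).path
    section = ("/" + path.split("/")[1]) if path != "/" else "/"
    groups[section].append(lcp)

for section, values in sorted(groups.items()):
    print(f"{section:<12} avg LCP: {mean(values):.1f}s over {len(values)} URLs")
```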
OK. And now a really long question from David, who doesn't like Pinterest. Is the Indexing API going to open up to general URLs? I don't know. My guess is it'll be tricky, because we see a lot of abuse with all of the submit-to-indexing features that we have. And I don't know if it would make sense to open up yet another channel where we have to kind of figure out how to deal with more abuse. Is Discover going to be in the API? I hope so. I will take this question as a nudge and ask the team again. Can we have a Fetch and Render API that has a static IP or a unique user agent? I don't think so, mostly because the Fetch and Render feature, or the Inspect URL feature in Search Console, is meant to reflect what Googlebot would actually see, and is not meant to be used as a kind of single-request fine-tuning for Googlebot in the sense that you have a specific IP address or a specific user agent there. I don't think that's something we'd be able to do.

A suggested feature: Google parser breakers. Is there something in the code we can fetch that can break your parser? For example, a few years ago, an image tag in the head would break parsing and therefore terminate the head prematurely. We want to make sure that our terrible HTML isn't causing issues. I guess the easy solution is not to make terrible HTML. It sounds like you're pretty advanced, so you can figure that out. But the part about individual elements in the head, that still applies, in the sense that when we render a page, similar to a browser, if there are elements in the head of the page that belong in the body, then we will open up the body and treat the rest of that section essentially as part of the body. And there are specific elements that we really need to be able to find in the head of the page so that we can take them seriously. That includes things like the rel canonical, the robots meta tags, and hreflang links, for example. So if you have elements at the top of the head of the page that essentially break the head in the DOM, that's something that could cause problems for those elements. And I don't think you explicitly see that in Fetch and Render or in Inspect URL in Search Console, because we just say, well, it looks like the webmaster wanted to start the body of the page here, and we will just treat the rest like it is part of the body. So we wouldn't necessarily see that as a bug or something that the site is doing wrong, but rather we're just trying to deal with broken HTML that is out on the web.
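Here's a small demo of that head-breaking behavior using a spec-compliant HTML5 parser. This is standard HTML parsing rather than Googlebot itself, and the markup is made up, but it shows how an image tag inside the head implicitly closes the head, so the canonical and robots tags that follow it end up in the body (it assumes the beautifulsoup4 and html5lib packages):

```python
# Demo: an <img> inside <head> closes the head early under HTML5 parsing rules,
# so the canonical link and robots meta that follow it land in the body.
from bs4 import BeautifulSoup

html = """<!DOCTYPE html><html><head>
  <title>Product page</title>
  <img src="/logo.png">
  <link rel="canonical" href="https://example.com/product">
  <meta name="robots" content="noindex">
</head><body><p>Hello</p></body></html>"""

soup = BeautifulSoup(html, "html5lib")
print("HEAD contains:", [tag.name for tag in soup.head.find_all(True)])
print("BODY contains:", [tag.name for tag in soup.body.find_all(True)])
# HEAD contains: ['title']
# BODY contains: ['img', 'link', 'meta', 'p']
```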
Extend the API. Some of us are using screen-scraping techniques to extract a lot of data that isn't available in the API. Why not make it available and monetize it? So I don't think we'd want to monetize the API. That's the first step there, in the sense that everything around money is really hard at Google, especially when dealing with customers and all of that. So I don't think that would be something that we'd monetize. I do think it would be really nice to extend the API, but it's really hard to encourage the product leadership to start doing more on the API side. So if you have any arguments that you think would be really useful, in terms of "this is how we can make Search better if we were able to pull data through the API," that would be really useful for us. So feel free to send that our way, and we'll pass that on to the team.

Bring back advanced search operators. I don't really see this happening, because it's something that most sites or most users don't actually use, and they are really a lot of work to maintain, especially across the kind of ever-changing infrastructure in Google Search. Maintaining some of these features that not a lot of people use is really expensive.

Bring back site removals. Pinterest is a parking meter of the internet; no one likes to see them. I don't know, bring back site removals. It sounds like you want to remove other people's sites. I'm sure the SEO team at Pinterest would be kind of against that. I don't use Pinterest personally, but I do see a lot of people getting value out of it. And there is a lot of good content there, so I don't know. I think, in general, with regards to site removals, especially on a personal level where you could say, "I don't want to see this site anymore," I think that's an interesting idea. I think the implementation is just really, really hard with all of the different ways that sites can be visible in Search nowadays.

Is there any preferred time zone for the last modification date? Yes. Well, in the last modification date, you should be specifying a time zone. It doesn't matter which time zone you use, but it should have a time zone. I think that's part of the date-time standard that's used in the XML file. How sensitive is Google when it comes to ignoring the last modification date? In general, we use it as a guide to understand when pages have changed. And it's not so much that it has to be exact, but rather we have to be able to understand, oh, this page has changed since the last time we looked at it, so we should look at it again. And from that point of view, it's important for us to be able to roughly trust the last modification date. In particular, the thing that we sometimes see is that people generate sitemap files and they just use the current date as the last modification date for all URLs. And that's something that's obviously wrong. When we look at the sitemap file and there are 10,000 URLs in there, and they were all updated in the last minute, then probably you're calculating the last modification date wrong. And from our point of view, it's not so much that we want to penalize the website for doing that. It's just, well, there isn't a lot of signal, a lot of useful information, in that sitemap file for us to work on, other than, perhaps, there are some new URLs that we haven't seen at all there before.
So in a case like that, if you just always have the same last modification date, or if you can't specify it at all, we can at least pick up the new URLs, but we wouldn't be able to know when the old URLs actually changed.

Let's see. The question goes on in, whoops, I think, a slightly different direction. We update our meta robots frequently, between index and noindex. And two months ago, we implemented the last modification date on product pages which came back in stock in the last seven days, and marked them as index. But we didn't see any impact on "Submitted URL marked noindex." I manually checked some of the last modification URLs; Google never seems to follow them. In general, I think this fluctuation between index and noindex is something that can throw us off a little bit. Because if we see a page that is noindex for a longer period of time, we will assume that this is kind of like a 404 page and we don't have to crawl it that frequently. So probably what is happening there is that we see these pages as noindex and we decide not to crawl them as frequently anymore, regardless of what you submit in the sitemap file. So fluctuating with the meta noindex is probably counterproductive here if you really want those pages to be indexed every now and then.

Hello? Yeah. So the lastmod date is not useful in that case, where we frequently change our meta robots. What I would try to do in a case like that is maybe set up a page that you can persistently maintain, and link from there to the individual products that you want to kind of have listed or not listed. And then maybe focus more on that persistent page rather than on the individual products that kind of come in and go out.

OK, and you mentioned the time zone. Where do we have to mention which time zone we are using? So I assume you're talking about the sitemap file, right? Yes, yes, yes. Yes, OK. So in the sitemap file, for the last modification date, the date-time uses a specific standard. And in that standard, it has a time zone attached to the end. If you have a Z at the end of the date-time, then that means it's, I think, UTC time. But you can also specify different time zones there. OK, so at the end of every lastmod tag, we have to mention the time zone. Yes. Well, I would make sure that you're doing it in the right standard. So I would check the sitemap file documentation and look up the date-time standard that we use there. It's like an RFC with some number. And usually there is a Wikipedia page with a lot of examples, where you can see examples with a time zone, with UTC, or with the Z at the end, so that you can compare that. Thank you. Sure.
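As a small sketch of that lastmod format, here's one way to emit a sitemap entry with an explicit time zone offset; the URL and timestamp are placeholders, and the format shown (for example 2020-10-02T15:30:00+02:00, or a trailing Z for UTC) is the W3C Datetime / ISO 8601 style that sitemaps use:

```python
# Sketch: write a sitemap <lastmod> value with an explicit time zone.
from datetime import datetime, timezone, timedelta

zurich = timezone(timedelta(hours=2))            # fixed offset, just for the example
last_changed = datetime(2020, 10, 2, 15, 30, tzinfo=zurich)

entry = f"""<url>
  <loc>https://example.com/product-123</loc>
  <lastmod>{last_changed.isoformat()}</lastmod>
</url>"""
print(entry)
# <lastmod>2020-10-02T15:30:00+02:00</lastmod>; UTC would end in +00:00 or Z
```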
OK, in light of the recent indexing issues, any update on the last 45% of affected canonical URLs that still haven't been restored? Is it OK to submit affected URLs manually in Search Console? Can you shed some light on what happened here? What is Google doing to prevent this in the future? I don't have any big updates on this at the moment. I haven't been following up on that. I think Danny has mostly been keeping track of that and pushing things along. One thing I can mention here is you can't use the Inspect URL and Submit to Indexing feature in Search Console, because at the moment that's kind of in maintenance. So that's kind of tricky there. I don't know, like, for individual sites, how strongly you would still see this, because my understanding is usually these kinds of issues are resolved for the visible URLs of pretty much all sites fairly quickly, within a day or so. But if you're still seeing a really strong effect from this, maybe drop me a note on Twitter, and I can take a look with the team to see if maybe there's something else that's happening with your site in particular.

I was wondering how Google Search evaluates the ad experience as a ranking factor. It's easy to estimate the readability of a font size, but I have trouble wrapping my mind around how much a huge sticky video on mobile would impact ranking; let's say it takes up 30%, to comply better with the standard, but is still a highly disruptive experience. So I had to double-check, but as far as I know, we don't use the Ad Experience Report as a ranking factor in Search. Rather, this is something that is specifically in Chrome, where if we see that a site does not comply with the standard and we can confirm that, then Chrome will automatically not display those ads. So it's not something that we would essentially really need to use in Search, because Chrome would handle that anyway. But it is something where probably the effect that you would see in Search would be based on things like the above-the-fold content, or the, I forgot what we used, not the intrusive interstitials, but kind of the one for if you have a lot of ads on the top part of your page. That's probably where you would see an effect in Search. I forgot what the name we had for that was. But it is one of those things from maybe two, three years ago, where we launched something around making sure that the top part of the page is actually content. I don't know how the font readability would fit into that. My feeling is the font size would be more something that would affect whether or not we would see a page as being mobile-friendly. And that's something that would apply for the mobile search results, but not for the general search results. So I don't think I really answered your question there, but I'm not 100% sure which direction I should go there.

I wanted to know how we can stop paginated pages from ranking. I want them to be indexed, but not to rank, as I always want the first page to rank and not page two or page three. But I can't even add noindex, as the listings mentioned on those second and third pages are important. And if anyone searches for a specific name, I want the listings mentioned on pages two and three also to rank. But I don't want pages other than page one to rank. So that's, I think, a tricky situation, in that you can't really specify where you want your pages to rank, or whether or not they should be shown in the search results, if you want them to be indexed. If they're indexed, then our systems will try to figure out how to rank them appropriately. So you're kind of in that situation where you want both things, to have it indexed but not have it counted, and, well, to not have it indexed but have it counted. But that's not really something that's fully supported there. What we generally recommend when it comes to pagination is to allow your paginated pages to be indexed, and block your filtered pages from being indexed, because these are essentially links to the same products as you have before, unless there are individual filtered versions that you would consider something more like a category page. With regards to paginated pages, one way you can make sure that we focus more on the first page is to link incrementally between the pages of your paginated set. So instead of linking from page one to page two, three, four, five, six, seven, and all of the other ones, link from page one to page two, and from page two to page three, page three to page four, so that when we look at that, we see that the first page is highly mentioned within your website. So clearly, this must be a very important page. And the other ones are incrementally less important, because they're further and further away from your home page. And by doing it like that, you can have those pages be indexed, but we will understand that these are not really that relevant for your site. And usually, that does end up with us kind of focusing on the first page, especially if someone is searching for a category; then we can focus on that first page and say, this is clearly the best page for that category.
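Here's a tiny sketch of that incremental pagination linking: each page links only to its neighbors instead of every page linking to every other page. The URL pattern and function are purely illustrative:

```python
# Sketch: emit only "next"/"previous" links on each page of a paginated set,
# so page one accumulates the most internal references. Hypothetical URLs.
def pagination_links(category_url: str, page: int, total_pages: int) -> list[str]:
    """Links to emit on a given page of the paginated set."""
    links = []
    if page < total_pages:
        links.append(f'<a href="{category_url}?page={page + 1}">Next page</a>')
    if page > 1:
        links.append(f'<a href="{category_url}?page={page - 1}">Previous page</a>')
    return links

for page in range(1, 5):
    print(f"page {page}:", pagination_links("https://example.com/pools", page, 4))
```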
The sitemap we submitted years ago keeps updating the submitted date in Search Console. It looks like it's getting resubmitted, but no one on the account has resubmitted it. This happens a couple of times a month. What might be happening there? So it is hard to say without knowing which site you're looking at, but one of the things that does happen, or can happen, is that there is the anonymous submission feature for sitemap files, where you can just ping a specific URL. And Google will see that as a resubmission of the sitemap file and try to reprocess it. Usually that is used by CMSs or sitemap plugins, those kinds of things, where when you update something, it automatically pings that URL and lets Google and the other search engines know that something changed in the sitemap file. And that would be treated as a resubmission from our side, and we would probably track that or show that in Search Console as well. So it's not so much that someone is manually going in there and clicking that button, but it could very well be that your CMS is doing that for you, kind of taking some of that work out of your hands and saying, well, we can deal with this automatically. Theoretically, it could also be that someone else is resubmitting your sitemap file, because it doesn't need to be tied to your account to ping that update URL. I imagine it's very unlikely that someone else is randomly resubmitting your sitemap file like this, because there's really no reason to do that. It doesn't help them, and it doesn't really change anything on your side. So from that point of view, my guess is it's probably just your CMS that is doing this for you.
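For reference, the anonymous sitemap ping described there is typically just an HTTP fetch of a ping URL after an update, which is what CMSs and sitemap plugins do behind the scenes. A minimal sketch, with a placeholder sitemap URL (this ping endpoint existed at the time of this Hangout):

```python
# Sketch: the kind of anonymous sitemap ping a CMS or plugin sends after an
# update, which can show up as a "resubmission" in Search Console.
import urllib.parse
import urllib.request

sitemap_url = "https://example.com/sitemap.xml"
ping = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")

with urllib.request.urlopen(ping, timeout=10) as resp:
    print("Ping returned HTTP", resp.status)
```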
A rendering check question: we have a single-page application website where, when we check the indexed HTML in the URL Inspection tool, it looks fine, and the rendering and the Mobile-Friendly Test look good as well. But when we test the live URL, the HTML is minimal and the screenshot is blank. What might be the reason for this discrepancy? So my guess, just based on this question and without looking at the URLs, is that we have trouble rendering the page at high speed. When it comes to Google Search, we're very patient, and we render pages; essentially, if they take a little bit longer, that's fine. If some of the embedded content isn't available yet, we will fetch that individually and then use that for rendering. But with the testing tools, we want to give you an answer as quickly as possible. And when we see someone is using one of these testing tools, we set the timeouts a little bit more aggressively, just so that we can give people an answer. And that means that if you're using these tools to look at the screenshot or look at how things are rendered, it's very possible that some of the embedded content, especially if you're pulling things from an API or from your back end, is timing out for that specific test. On the one hand, if it works in Search, then you're probably fine, because Search, like I mentioned, is a little bit more patient. On the other hand, it makes it a little bit harder to debug. So I would try to look into how much embedded content is actually needed for your pages to load. You can look at a waterfall diagram in things like webpagetest.org or in Chrome Developer Tools directly, and make a rough estimation of, is this something that I can improve? Can I maybe fold some of these files together, so that instead of 300 or 400 requests, it just takes 100 requests to render this page? If so, then that can make it easier for you to debug. And it does make it more likely that we won't accidentally run into issues when we render for Search as well. And of course, it usually makes things faster for users, too, which is a nice side effect. So my recommendation there would be not to panic if it's working in Search, but still to kind of look into ways that you can improve that over time.

Is it recommended to use keywords as they are in the content to get the best results in Google? Does a crawler use the fundamentals of AI to make combinations of keywords that are closely related to the content and then rank them? We want to know how a crawler picks up and makes combinations of the keywords we rank for on Google. So we don't kind of make up the keywords and say this page will rank for these keywords; rather, we take the queries and try to find the best matching pages for them. One of the simplest approaches that we take is we split a page into words and store all of the words in an inverted index, so that when we know that someone is looking for this combination of words, we can find all of the documents that have those words in them. And then, based on that list of documents, we can reorder that list and say, well, this is the right order that we should be showing these to people. So it's not so much a matter of Google reading the content and then deciding which keywords to use, but rather us getting the keywords and then deciding which document to actually show in the search results.

I think the "fundamentals of AI" is something that always comes up because we talk about this all the time as well, especially on kind of the marketing and user side of Search, where we use things like BERT to improve the quality of our search results, which is ultimately a machine learning, artificial intelligence algorithm. For a large part, these algorithms are there to try to help deal with cases where we don't really understand what the user is searching for. We still see that about 15% of all searches are completely new every day. And these aren't things that we can prepare for. We have to kind of estimate: what is the user actually searching for? Are there any acronyms in there? Are there synonyms that we can use? Is there, like, a singular and plural that we can figure out? And that's something that, for the longest time, we used to do more algorithmically, with hard rules where we say, oh, if there's an S, maybe it's a plural. But that doesn't apply across whole languages and all kinds of words. So for that, we use a lot of machine learning now to really understand better what it is that people are actually looking for, instead of just matching the individual words in a query.
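Here's a toy version of the inverted index described there, just to make the idea concrete: split each document into words, map each word to the documents containing it, then answer a query by intersecting those document lists (ranking would happen afterwards). The documents are made up:

```python
# Toy inverted index: word -> set of documents containing it; a query is
# answered by intersecting the sets for its words.
from collections import defaultdict

docs = {
    "page-1": "blue swimming pool heater for small gardens",
    "page-2": "spa and swimming pool covers",
    "page-3": "pool heater installation guide",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query: str) -> set[str]:
    """Return the documents that contain every word of the query."""
    word_sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

print(search("pool heater"))   # {'page-1', 'page-3'}
```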
Hey, John. Sorry for interrupting. Go for it. Quick question, just following up on our last discussion two weeks ago: did you have the chance to look into the site that we shared in the chat, the site that had a drop of about 40% and that recently went through a light redesign? I'm not sure if you know who I'm referring to. I don't remember which one. It was a digital trends website. Yeah, I think I passed that on to the team. I don't know if I heard anything back there, though.

So basically, we are still working on that. And one question that we have, and I think it relates to that ad experience question that you had in the chat, where I think you were trying to mention the page layout algorithm. Oh, yeah. Yeah, so the page layout. So how long? You should answer these questions. Yeah, so, like, how long does it take to recover from the page layout algorithm if you've been impacted by it? For a lot of these quality algorithms, I don't know if we have a specific time where we can say that it's been resolved now. It depends a little bit; I don't know specifically with the page layout algorithm. But with a lot of these, we do look at it more on a broader level for a website, where we say, well, we can't test every page individually for this, but we've seen a large part of the pages of the site have this issue. And in cases like that, we have to first understand, well, a large part of the site doesn't have this issue anymore. And that means we kind of have to reprocess the larger part of the site, which is something that can easily take a couple of months. So if you're making changes like that, and you're essentially affecting the quality of your pages overall, then that is something that I would expect is not going to happen overnight, is not going to happen within a week. It's more a matter of, I don't know, two, three, four, five months, maybe, where we really understand, well, the site has significantly changed.

Yeah, yep. So could it be related to having an ad that takes up, let's say, 30% and appears above the fold? Could that trigger the page layout algorithm? We don't have any specific percentages that we would talk about there with the page layout algorithm. I could imagine that if you're talking about something that is really a bigger part of the page, it could have an effect there. But we don't have specific numbers where we say, oh, the ad has to be a maximum of this many pixels or this many percent of the viewport. That's not something we have.

Yeah, one other question, and this is just trying to find out if we missed something. Like, what could be some reasons that would cause a drop of traffic of 35% to 40% across subdomains, not just the main domain, but across subdomains as well? Like, what could cause this? I don't know. Lots of things. It's really hard to say, because on the one hand, there are things like our understanding of the quality, which can change, and that is something that could affect the larger part of the website. But it's really hard to say in general, because on the one hand, we make algorithm changes, and the rest of the web changes all the time. And all of that kind of plays in together.
Even if you don't make any changes on your website, that can at some point cause either a subtle decline or kind of a really strong drop.

Yeah, when you were talking about quality issues, what could cause quality issues if, let's say, for example, we are updating the content frequently but not significantly? Could that be an issue? Like, would Google take issue with us doing that? I think that's perfectly fine. Yeah, I don't think that would generally be a problem. Usually what I see with regards to quality issues is really when you look at the website overall and you can tell that users aren't trusting the content as much anymore, or they're just not sure about the content anymore; that's something where I'd say there are more clear-cut quality issues. Just individual updates of a site, or making a lot of changes on a site, or making few changes on a site, that's usually more the usual noise that is out there on the web. Some sites make a lot of changes, some sites don't make a lot of changes; that's all fine.

And the last question. Yahoo syndicates a lot of our content. In the majority of cases, our content ranks first because Google recognizes us as the source of the content. But Yahoo doesn't canonicalize the content that it is syndicating from us, and in some rare cases they manage to rank above us for content that is created by us. Could that be a cause for a drop like this? No, I don't think so. I think that's something where, ultimately, the issue that you'll see is that these syndication sites essentially rank as well in the search results, and sometimes they rank above your content, just for whatever reasons. But it's not a sign that we would say your site is lower quality because it's syndicating content to other sites. I think, in general, it would almost be the opposite: if other sites want to syndicate your content, then that's kind of a good sign.

Yeah, we keep circling back to us being affected by the page layout algorithm. And we made changes, and we now understand that it will take some months for those changes to be recognized. But how can we make sure that those are the necessary changes? What else are we missing? Or how can we check, to make sure that after two or three months we don't realize, oh, we missed this thing, and we should have changed this as well? I don't think you can, unfortunately. I mean, there definitely isn't a testing tool that will tell you what specifically you need to change with regards to your website's quality. What I would recommend is to get a lot of people's opinions on it. So that could be something where you could go to the Webmaster Help Forum and say, from a quality point of view, what else should we be taking into account? Sometimes you get a lot of feedback that is less relevant, maybe small details. But sometimes you also get feedback on issues that you've been trying to push out of your mind for a while, and that you actually should focus on a bit more. Thanks a lot. Sure.

All right. Maybe I'll just open it up to more questions from any of you. Hi, John. I have a question related to the merging of websites that we talked about earlier. So we're merging two pretty big websites, not for SEO reasons, but for branding purposes. And there are some checklists out there. But I want to ask you, what are the most common errors that people make when they're merging big websites? What do they forget? What would you recommend I really focus on? Yeah. I don't know if we really have a list of the common errors that people make. The one thing I would recommend doing is really tracking all of the things that you're doing, really making sure that you have a clear list of all of the URLs beforehand and where they should be going, so that you can check this afterwards as well: that all of the redirects are in place, that all of the internal links are working appropriately, and just making sure that you have all of these technical details really nailed down. And it is something that, especially if you're doing it all at one time, is kind of nerve-wracking. But I think the best you can do is really just make sure that all of those small details are covered properly. OK, thank you.
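As a rough sketch of the kind of post-merge check described there, here's one way to keep a list of old URL to expected new URL and verify that each old URL actually returns a permanent redirect to the right place. The mapping is invented, and it assumes the requests package:

```python
# Sketch: verify a migration redirect map. Each old URL should answer with a
# 301 whose Location header matches the planned new URL. Invented mapping.
import requests

url_mapping = {
    "https://old-spa-site.example.com/covers": "https://pools.example.com/spa/covers",
    "https://old-spa-site.example.com/heaters": "https://pools.example.com/spa/heaters",
}

for old_url, expected in url_mapping.items():
    resp = requests.head(old_url, allow_redirects=False, timeout=10)
    target = resp.headers.get("Location")
    ok = resp.status_code == 301 and target == expected
    print(f"{'OK ' if ok else 'FIX'} {old_url} -> {resp.status_code} {target}")
```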
And the last question. Our developer seems to not want to do a full 301, just a 302. How big of a difference will it make to only do a 302, and not a full permanent redirect? So my guess is it will take a little bit longer for the canonicalization to kick in and to focus on that new URL that you have there. But ultimately, I think in the long run, it won't make a big difference. But if you're moving permanently, then it feels like a 301 would be the right thing to do in any case. I was hoping you'd give more of a "yes, you should do that," so I can tell the developers that that was the... Yeah, I mean, it is what we recommend. It's something where, if you don't do it, well, a lot of sites get it wrong, and we have to deal with that as well. But if you really want to make sure that everything is perfect, then a 301 is the right way to go. Thank you. That was all.
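For that 301-versus-302 point, here's a minimal, standard-library-only sketch of what returning a permanent redirect looks like at the HTTP level. The paths and port are made up, and in practice this would usually be configured in the web server or CMS rather than hand-rolled:

```python
# Minimal sketch: serve a 301 (permanent) redirect; a 302 would signal a
# temporary move instead. Hypothetical paths; for illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer

REDIRECTS = {"/old-page": "https://example.com/new-page"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = REDIRECTS.get(self.path)
        if target:
            self.send_response(301)            # permanent move
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()
```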
Hey, John, can I ask you a question? Sure. Yeah, OK. So we have a software application website; basically, it's a product website. Recently, we decided that we should create a news portal kind of section so that we can cover industry news, and we would be writing like four to five articles every day. Now, the situation is something like this: because our website is a product website, the team is confused about whether we should create a separate section for news on the existing website, or whether we should just create a new domain for the news section. The confusion is that if we go with the existing website, what are the chances that Google might consider our product website a news portal after a certain period of time? Or vice versa: it's a product website, so Google might consider that it's not a news website, so why would it rank our news in the news results? So what do you suggest we should do here?

I would recommend keeping it within your existing website. By keeping it within your existing website, you're automatically making sure that it has all of the weight that it can get from the rest of your website. If you split it out into a separate domain, essentially you're starting with a new website, and you have to kind of work to get that well known out on the web. Whereas if you start with your existing site, then you already have something to work with. With regards to Google seeing it as a news site or a product site, in general, that's not something I would worry about, especially if you can structure the URLs in a way that is clear: this is news, and this is the rest of your site, for example. So if you have a clear URL structure there, I definitely don't see any problem.

Yeah, with the existing-website approach, we decided to create a separate category for the news section. And we would be using news schema so that Google has no doubt and doesn't get confused about whether it's a news website or a product website. That was the confusion. Yeah, thank you for the question. Cool.

Now, any other questions? What can I help with? I think we're pretty much on time, so that's also fine, if everything otherwise is kind of covered? No? OK. We have been news publishers for four years, and every week we do original news coverage, which is linked to by other publishers. But we never appear in Top Stories, while they do. What should we do? It's hard to say, because for Top Stories, it's not so much that there is a specific meta tag or something technical that you need to do there. Rather, it's an organic search feature. And depending on how our search algorithms look at that for your site, it can go one way or the other. So that's really kind of tricky to say. One thing you could do is maybe start a thread in the Webmaster Help Forum, so that someone can take a look at your details and see if there's something specific that they can point out. But in many cases, there is nothing specific, kind of technical, to point out and say, you need to do this specifically to be visible in the Top Stories section. Oh, OK. Cool.

OK. Maybe we can take a break here. I need to jump off and do some other things at this time. Thank you all for joining. It's been great having you all here. Thanks for all of your questions. And like I mentioned, if there's something that's still kind of stuck or that you need some help on, feel free to ping me on Twitter and let me know about that. Otherwise, thanks for all your questions, and have a great weekend, everyone. Thanks, John. Cheers. Bye, everyone. Thank you. Bye. Bye. Bye, John. Bye-bye. Thank you. Bye.