All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where webmasters, publishers, SEOs, anyone can join in and ask questions around SEO. A bunch of stuff was submitted already. But if any of you want to get started with the first question, you're welcome to jump on it now. I have a question if nobody else does. Go for it. I know you hate these questions. And it's been a long time since I asked them. But over the weekend, there was probably more chatter in the SEO industry about a Google update than I've seen since maybe the Penguin or Panda days. I did reach out to certain people at Google to find out, not you, but other people at Google to find out if there's any statement you want to deliver. Are you aware that I reached out? Am I aware that you reached out? Meaning is the discussion internally at Google about maybe responding to this question? I mean, we get questions around potential algorithm updates all the time. So I think that's kind of natural. Right. Anything you could share about this one? I don't have anything explicit. I mean, on the one hand, we make changes all the time. So it's kind of the balance between, well, we make all of these changes all the time. And sometimes some individual changes are more visible, or at least more visible to some folks or some sites. So it's kind of the balance between, well, do we need to call this one out explicitly? Or is this kind of just a normal change? And I don't know what the final decision there is, if there's something decided on that yet. But it's being discussed internally. Sure. Sure. No. Like it always is? Or this one specifically more than others? But is this one anything more than another? I don't know. Not more than others, I'd say. All right, I tried. All right, thank you. 
I mean, we try to be as transparent as possible about these things. But sometimes there just isn't that much that individual sites can do. So it's kind of hard to say, well, things change. Like they always change. No, it's true. I mean, I did ask Danny last week at the conference to release another Google penalty because I haven't seen a penalty announced in a while, so hoping maybe. No, I'm just joking. No. It's your fault. Yeah, OK. At least now people know who to blame. Yes. Cool. All right. Other questions? Oh, my God, a really long one in the chat. In Search Console, can you please use a better word rather than valid when it means something very different, like valid and indexed? I don't quite understand how you mean that. If you want, feel free to jump on in. Hi, John. Can you hear me? Yes. Right, sorry. I keep nagging you about this in various other places. So if you go into GSC and you look in, say, the Enhancements reports or something, if I, say, look in my mobile one, I have a tiny site. I have 230 pages or something each in desktop, AMP, and a light mobile site. And it'll go and tell me you have 123 valid pages under mobile usability. Now, there are no errors in any of the other pages. And what it means by valid there is apparently, oh, they're valid and we happen to index them. Now, it's inconsistent between different things. If I look at the new thing that's just come up, the speed one, I get a different number of things marked as valid for mobile under there. So clearly, it can't just be valid because otherwise, the two of them would give the same number. And all over the place, there are things that say valid, like the old one, the crawler stats, where it says pages when actually I don't know whether it means pages or objects, including other files downloaded. So it'd be really helpful to me. And I've griped about it for a long time. Can we find better words than valid when we don't mean valid? 
Because the implication is everything else is invalid. And as far as I can tell, that certainly isn't true. Oh, OK. So primarily with the thought that the things that are not valid might be valid. Yeah. Except they're just missing from that report. Call me OCD, but I can tell you for sure that I have checked my pages with everything going, and have been doing it since 1995. And I'm pretty sure the pages are valid. I'm quite happy you don't index them all. But you should say that rather than imply they're invalid, I think. OK. OK. That was all really. It's just a gripe. No, no, that makes sense. Because I think these aggregate reports can be a bit tricky in that sense, in the way that we report over a sample of the URLs. And we'll tell you, of this sample, this many are valid. But I have so few. I don't think the sampling issue even applies to me. OK. Well, I mean, it depends a bit on the site. But it's something where, especially in the aggregate reports, we do sample them. So if you add them up, you won't get the same numbers across the different tools. Because we just look at a sample. And it's basically saying, of those that we looked at, these are OK. But that doesn't mean the ones that we didn't look at are bad. But I do get that confusion from time to time. So maybe we need to find a different way to frame that. That's all. I think it just needs some different wording. Different wording is always tricky with translations and all of that. But OK. Cool. Thanks, John. Sure. And then a longer question from Matthew. Do you want to maybe just ask that one directly? Do you have a microphone, maybe? It might be helpful to explain to people how it works. I've never used this interface before. I had to guess a few times to press the microphone thing to present. OK. So there's a big microphone button if you hover your mouse over your picture. Or you can unmute yourself. I can't unmute you. But feel free to jump on in. 
I'll try to read it out loud and see if I get it right. If not, feel free to correct me. We're an e-commerce store. We have many indexed pages, search pages that bring in visitors who end up purchasing. For a few months now, someone has been building thousands of links to search pages with illegal and sexual anchor text. Google indexes these dynamic search pages, and we rank very high for terms like Viagra online New Zealand for several weeks until Google demotes those pages. OK, that seems kind of awkward. If we noindex the internal search, we lose converting traffic. On the other hand, manually noindexing these spammy dynamic search pages is too time consuming because there are hundreds indexed every week. What can be done? I don't know what the best approach is here. I think, so in general, with e-commerce sites, what we'd recommend is if someone is searching for a page or searching for something that you don't offer within your site, then essentially you're serving an empty search results page, which ideally you would be returning a noindex for. So anytime someone goes to a category that doesn't exist or does an onsite search that returns nothing, it should have a noindex on it or a 404 code so that we know this is not something that should be indexed. So ideally, that would be something that your e-commerce site setup does. Theoretically, you could also do this with JavaScript. I don't know, maybe that's something that could be done with Tag Manager as well, where you can dynamically set the noindex. I don't know the details of your setup, so it's really hard to say how exactly you'd be able to do that. 
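As an illustrative aside, the noindex-or-404 logic described above might be sketched like this. This is a sketch, not an official recommendation; the function name and return shape are invented for the example.

```javascript
// Sketch of the advice above: internal search pages with no results get
// a 404 status and a noindex directive, so Google knows not to index
// them; pages with real results stay indexable.
function searchPageDirectives(resultCount) {
  if (resultCount === 0) {
    // Empty result set: keep this URL out of the index.
    return { status: 404, robots: 'noindex' };
  }
  // Real results: a normal, indexable page.
  return { status: 200, robots: 'index, follow' };
}

// A Tag Manager / client-side variant would instead inject
//   <meta name="robots" content="noindex">
// into the page once it knows the result count is zero.
```

Server-side is the more reliable place to do this, since the directive is then present in the initial HTML rather than depending on rendering.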
The other thing is that generally speaking, we recommend sites don't let their internal search pages be crawled and indexed, because it's very easy to have things blow up in the sense that we suddenly have millions of URLs from your website when actually you just have a few hundred, just because of all these different variations that people can search for and that your server returns valid content for. So that's kind of the other aspect to keep in mind. My general approach to this kind of a problem is to try to find the queries that you do care about and make sure that those are indexable, and then just make sure everything else is not indexed. One way you could do this is instead of having search pages that are there to drive traffic, you have more category pages, where you handpick the categories that you do want to have content shown for and let those category pages be indexed, while the search pages themselves would be blocked from crawling and indexing. So that would still allow you to have these pages that are focused on a specific category or a type of thing that people are searching for, without opening things up for anything that anyone is searching for that your site might respond to. So that's kind of the direction I would head there. All right, there are a bunch of other questions that were submitted on YouTube. But if anyone wants to kind of get another question in on the side before we dive into that, you're welcome to jump in. Or if not, feel free to jump in during one of the questions if you have more information or more comments. Oh, we have someone. Hello. Hi. Hey, I actually have a question for you if that's all right. All right, go for it. So I'm working on a Swedish website within the Swedish market. So we are writing content in Swedish, et cetera. For some of the subjects, there is not enough information when it comes to, like, YouTube videos. 
So to give an example, if we are writing about vitamin D deficiency, for example, and we want to post a video about vitamin D from YouTube, if we use an English YouTube video, how would that affect us? I don't think that would have any effect. So I mean, any negative effect, in the sense that we would primarily index your web pages and we would focus on the textual content on your pages. We would see that there's a video embedded. So theoretically, what could happen is that video is associated with your landing page, with the web page that you created. So if someone is searching for that video in video search, we could theoretically show your page as a landing page for that. That might be kind of awkward, I think, if there's a language mismatch. But for most videos, we also have good content on the YouTube landing pages. So unless this is a video that is really uniquely available on your site, then it's probable that we would just pick the YouTube landing page for searches for the video and pick your web page for searches around the text that you have on your pages. So it's not that we would say this is confusing and we would treat this as bad. It's more that, well, we don't have this direct connection between the video content and the web page, and that's fine, too. All right, thank you very much. Sure. All right, so let's jump into some of the questions that were submitted. Can single-page websites rank if they cover the topic really well? Or do Google's algorithms automatically block the page from ranking and prefer sites with tons of pages? Single-page websites can rank. They can rank fairly well. There's nothing against single-page websites, per se. The one thing I would caution, though, is that it's a lot easier to build up your website over time if you keep building on your existing website rather than creating new websites for each individual page that you create. 
So if this is a topic that you already have a website on, I would try to integrate that within your existing website. If you want to create a single landing page for something like an ad campaign, whether that's online or offline, then obviously, that's fine. That's something that can work. But in general, I kind of recommend taking a website and building off of that rather than creating individual one-page websites. Being able to have one stable place for your general web presence makes it a lot easier to build up value over time, because people will know, well, this website has created lots of good stuff. And it's a lot easier to kind of grow that as a website rather than say, well, this one page is actually pretty good, because people will be pointing at individual pages on your website, too. But all of those individual pages are within your website, which makes it a little bit easier to rank your website rather than just that one page. And obviously, within a website, everything is kind of linked well together. So you have a clean navigation. You have a way for crawlers to go from one page to the other to kind of explore things from there. So I'd say if you really need to just create one page and put that online, then go for it. That's fine. But if you have a way of integrating this within your existing website, I generally prefer to have fewer websites rather than a lot more. We had a manual action for Pure Spam because of how our domain was used before we purchased it. We purged all the backlinks, fixed any issues, and sent two review requests spaced two weeks apart over the past month. But we're still waiting for a response. How can we follow up and have the manual action checked? It feels like it's never going to be resolved. So waiting for the reconsideration request to come back is really the right approach there. There's no way to kind of bump yourself up to the top of the queue. Sometimes things get a little bit backed up in the review queue. 
And essentially, you just need to wait until all of those are resolved. And then they'll get to your site eventually. Resubmitting a site with another reconsideration request won't push it higher and it won't push it lower. It'll essentially keep that initial reconsideration request in the queue. And it'll get processed when its turn comes. With regards to Pure Spam, that's something that wouldn't be related to backlinks. That would purely be related to the actual content. One of the things that the reconsideration team does look out for, though, is that there's actually a website live on that domain. So if you go and just delete your whole website and then do a reconsideration request for a Pure Spam issue, then the reconsideration team will look at it and say, well, there's nothing here that we would rank. Therefore, it doesn't really make sense for us to process this reconsideration request, because there's nothing here now. If we remove the manual action, there still won't be anything here. So there's nothing for us to change there. So if you're just starting, or if you're cleaning up a Pure Spam manual action, make sure that you have normal web content on the website so that the reconsideration team can look at it and say, well, there's good stuff here. We need to make sure that it's indexable. Another thing maybe to mention, especially with the Pure Spam manual action, is that it removes things completely from indexing as well. So if the Pure Spam manual action has been in place for a while, then our indexing system will have to get rolling again for your website, which sometimes takes a few weeks as well. So when you get the reconsideration request back and it says, all is clear, you're ready to go, that doesn't mean your website will suddenly pop back into search results. That means our systems can then start working on it. 
And maybe a couple of days later, maybe a week or so later, that content will start to appear in Search Console and in our indexing system. That sometimes just takes a little bit longer to kind of get back into play again compared to some of the other manual actions. Or for example, if a website was hacked temporarily and got the Pure Spam manual action for a really short time, then things would still be in our index more or less, and we can just reactivate that. But if it's been out for a longer time, it takes a little bit longer. Thanks for the clarification, John. Sure. That was actually my question. And thank you very much for following up with the last point on this. It's going to take a little while before things start to go back into the index. Given that we do have some pretty awful backlinks still in there, and our reconsideration request was actually approved today, which we can definitely celebrate, is there anything you'd recommend that we should look at additionally for improving indexing, given that there are crappy backlinks and references out there? I would double check what kind of bad backlinks they are. If it's something where the previous webmaster really did some shady link building, then that's something that you might just want to disavow on a domain basis. Maybe take the biggest ones or the ones that look the worst to you and just disavow those. But in general, that's something where, purely from an indexing point of view, that wouldn't be playing a role there. OK, brilliant. Thanks a lot for the clarity. And then in that case, I guess it's just a waiting game. We'll wait a couple of days or a week or so, and then hopefully see some results. Yeah, yeah. What you'll probably see is that the crawling side takes a little bit to start rolling again. And then after things have been crawled, then the indexing side can happen again. So if you're watching the server logs, if you have access, then you should see at some point that crawling is starting. 
And then you know it's not going to be too much longer until indexing is back. Well, thank you very much. Sure. A year ago, we disavowed a long list of links we thought were hurting our site. Within that list, there were about 15 to 20 links we were not so sure about, and we disavowed them anyway. After reviewing our disavow file, we'd like to check if those links help our site or not. Should we remove them from the disavow file all at once, or one by one every couple of weeks? Or is this not a good approach at all? How should we go about it? So from my point of view, if you're unsure about certain links in your backlink profile and you know these were not created in a bad way, then I don't think you need to disavow them. I'd really only try to disavow things where you're really sure that actually there's something bad behind that that you aren't able to clean up. Or if you're in a situation where you're, like, losing sleep because you don't know what Google's systems are doing and you just want to make sure that they don't misinterpret something, then, of course, the disavow is probably not necessary, but it's good for peace of mind. So from my point of view, if you don't want to disavow those anymore, I would just remove them from the file and move on. My guess is, with 15 to 20 links that you're not sure are really terrible, my guess is those wouldn't play a big role in your site's ranking. So it's probably not something where you'd see a jump in traffic suddenly because those links are counted again, but rather maybe a subtle change over the long run. But if you're unsure about these and you're talking about a handful, 20 links, something like that, then I wouldn't worry about it. I would just remove them from the disavow file and let them be processed normally. Hi, John. This is actually my question, and I wanted to follow up on your answer. 
So if those links were actually spammy links and we removed them from the disavow file all at once, will that badly affect our website? Essentially, we would try to process those links when we try to recrawl those pages, and we would just treat them as normal links to your site. And our algorithms do have some protections in there, where when we recognize that something is spammy, we try to ignore it. So that's something where, ideally, they would help your site a little bit. In the worst case, they would probably just be ignored. OK, I understand. Great, thanks. Sure. Due to a technical problem, we had a duplicate content issue for a period of 7 to 8 weeks. Pages from our main site were served one-to-one on our verticals without a canonical tag. We fixed the issue recently, and the wrong pages on the verticals returned 404s. Do we have to expect a penalty for this? Or are we already seeing the results of such a penalty? So maybe the first one, before we move on to the other questions: no. This would not result in a penalty. This would not result in a manual action from the web spam team. It's also something that algorithmically, when our algorithms look at your site, they're not going to say, this is a spammy site, just because you have some duplicate content, kind of like where you're mirroring things on a per-page basis. This kind of duplicate content is very common. A lot of sites, for example, have the same content on the dub-dub-dub version, as well as on the non-dub-dub-dub version. Technically, that's duplicate content. In practice, that's kind of the way the internet works. It's no big deal. So definitely, from a manual action point of view, there is no manual action for duplicate content. The only time a manual action would come into play with duplicate content is if your website is essentially scraping a bunch of other websites. 
And it's not so much a matter that you have some duplicate content across your website, but rather that your whole website is duplicated from other websites. There is nothing kind of valuable for us to index on a website like that. So that would be a place where the web spam team would say, maybe we need to take a manual action. But this kind of thing, where individual pages are just mirrored, no problem. I mean, technically, it's good to clean that up. It improves indexing a little bit, makes it easier for us to focus on the right pages. But it's not that you're going to see a big negative drop in search because of that. The verticals in question show over 2 million pages indexed in Search Console. Is it a good idea to speed up the 404 recognition process with an upload of a sitemap for the verticals, including explicitly the wrong URLs? Theoretically, you can do that. In practice, I don't think that would make a big difference. So what is probably happening in a case like this is we would index these pages individually. And when it comes to showing them in Search, we would say, well, we have a few copies of the content. We will pick one of these pages and just show that. So we're kind of already folding them together when it comes to the search results. So it's not that you need to remove these from Search immediately. With regards to kind of having them be processed a little bit faster, having the graphs go back to kind of a more normal situation a little bit faster, that's something where you could do something like this, maybe a sitemap file of pages that are now 404. Or if you have redirects in place, then linking to those redirects from the sitemap file could help there. I think in this particular case, if you're not worried about showing something wrong in the search results, I wouldn't necessarily worry about it. So you can probably trigger these pages to show in Search if you do a very tricky site query. 
But for normal queries, we would probably pick your primary pages. Hi, John. Hi. So John, actually regarding sitemaps, XML sitemaps, I was thinking about something. So how would Google behave if I have a list of 404 pages in an XML sitemap? So it goes and crawls one page, finds a 404. Does that mean that Google is not interested in going further to the other pages because they are all 404s for them? No, no. Just because you link to 404 pages in the sitemap file doesn't mean the sitemap file is bad. So we would still use the sitemap file. The important part is the last modification date there, which tells us that something has changed on these pages. We would still try to prioritize crawling of those URLs. So in a case like this, where you have sitemap files linking to 404 pages, we would just try to crawl those. And at some point, we would say, well, it returned a 404. We don't need to index this page anymore. And that's fine. I suspect we would show this in Search Console as well and say, you submitted pages that returned 404. But if you're doing it on purpose, you know that that's going to happen. So it's not going to cause any problems otherwise. All right. Considering that Google is still in its early stages of understanding natural language written in non-English content, do you think it's a good idea to use more structured data on foreign language sites? This is, of course, to make it easier for the indexer to understand the content of the page quickly. That's a unique question. I don't think I've seen that before. In general, we do try to understand the language on a page a little bit better. Sometimes it works better in some languages, and sometimes it works really terribly in other languages. But it is something that we work on. With regards to using structured data to kind of explain the entities on a page, I think that's generally fine, and it definitely wouldn't have any negative effects. 
I suspect for the languages where we have trouble understanding the language, it wouldn't play that much of a role, though. Because if we can't understand the entities within the queries, then we would have trouble mapping that to content on your pages. And in those cases, we would fall back to the situation where we understand that these are sentences or that these are words, and these words are on that page, and that page makes sense to rank for these kinds of queries with those words. So we would try to do it more on a per-word basis. So I think giving us more structured data on pages is generally a good practice anyway. But you're probably not going to see a big jump in kind of relevant search results for your site by implementing structured data in a language that is hard for us to properly understand. Hi, John. Can you hear me? Hi, John. I have a quick follow-up question. Sure. Wow. One at a time. Xavier, go ahead. Yeah, thank you. I just wanted to jump in on this topic, I think. Because as you may know, we can't push everything we want in a company or with clients to make them implement structured data and other stuff, because they've got priorities with IT, et cetera. As of today, the official communication is that if it were implemented, it would be better for Google. But at some point, will we have, someday in the future, an official announcement saying that Google is taking that structured data into account to enhance, in a way, the ranking? Because as of now, we don't have enough arguments to tell people, yeah, you need to implement this, because reasons. Because maybe Google will use it. And you see where I'm going? We want to put that kind of stuff into our queue, but we need a little bit more than this. Thank you. Yeah, I think that's tricky. So the clear thing at the moment is some of the search results features, the way that we show things, they require certain structured data. 
And I think for those, the argument is a little bit clearer, in that if you want your search results to be shown in this fancy way, then you need to give us the information that we can show there. So that's, I think, a little bit clearer. It doesn't provide any ranking boost, but at least it's a visible change in the search results. And sometimes having a page be more visible in the search results is useful just the same as ranking a little bit better. With regards to using structured data in general for ranking, I think that's kind of tricky. So on the one hand, we do use structured data to better understand the entities on a page and to find out where that page is more relevant. But that doesn't mean that just because people are doing things in a technically correct way on a website, that page is a better page than it would be otherwise. So we would try to use that to show it in more relevant search results that would perhaps bring more users to your pages, users that actually match the topics of your pages. But it doesn't mean that we would show it to more users or that it would rank better. So that's something where I don't really see that changing in the future. That kind of goes back to, I don't know, way in the beginning, where people would ask, is valid HTML a ranking factor? Because clearly, with a page that has valid HTML, they spent more time on that page. Where from our point of view, well, they might have spent more time doing things technically correctly, but does that mean that the page is actually a better search result? Is that really something that provides more value to the user, or is it just that someone was a little bit smarter in creating that page? And the actual content that people would read is not actually that much better. So that's kind of the tricky balance there. 
And I don't see that changing in the sense that we would suddenly say any page with structured data will have a ranking boost, because structured data isn't visible. So even then, we would be ranking things higher for technical reasons that users would not even be able to appreciate. OK, thank you. Hey, John, a quick follow-up question there. Do you see in the near-medium future Google moving away from structured data because it's better able to understand natural language? OK. To some degree, I could see it becoming easier for us to extract the entities of a page over time. We already do that. It's something that I think is probably still in early stages. But there are lots of things where we still need to kind of rely on input from webmasters to be able to highlight that properly. So things like the rich search results, where individual elements of a page need to be extracted. And we need to be able to understand those properly and to say, well, this is like a review count. And this is the maximum star rating. And they have four out of five. And so many people reviewed something. All of these kinds of details are pretty tricky to extract automatically, even if you have a really good system. So you see some of that with the Data Highlighter tool, where we do try to extract things based on patterns on your website. And that's something where you can try that out with the Data Highlighter in Search Console to see how well things can be extracted, even if you clearly define which element is which part on the page. So I think part of that will definitely remain, certainly in the near future. So if anything, when talking with the engineers, they come up with more ways to use structured data rather than fewer. 
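As an aside, the review count and star rating details mentioned above are the kind of thing typically expressed as JSON-LD structured data. This is a minimal, hypothetical snippet, with the product name and numbers invented for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4,
    "bestRating": 5,
    "ratingCount": 27
  }
}
```

Marking these fields up explicitly is what spares the extraction system from having to guess which number on the page is the rating and which is the count.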
The one thing that I have seen, which I thought was pretty cool, is that some of the engineers have been looking into finding ways to recognize when a site could be using structured data to highlight things in a way that we could understand a little bit better. So for example, if we can recognize that there are events on your pages, even if we can't extract those events properly ourselves, we could theoretically send you an email or a Search Console message and say, hey, it looks like you have events on your website. If you want those events to be shown within the events features in Search, maybe structured data would be an idea to help out there. So that's probably more the near future and the medium future. I don't see it going away there in the really long-term future. I have no idea how that will evolve. Got it. Thank you. That's helpful. All right. Is there anything you can advise as crucial for single-page application sites? Which 30x code should we use to not lose any link equity? And how long should we keep that redirection? So I guess those are two different questions. Single-page applications, we have tons of documentation on that. So these are essentially JavaScript-based sites where you have one HTML file. And with JavaScript, you're essentially pulling in different content from the server and creating different pages under individual URLs. I think the main thing I would focus on there is to make sure that you have clear, indexable URLs. That seems to be the main thing that people run into when they start looking into this problem. And apart from that, for JavaScript rendering, we have a ton of information in our developer documentation. So I would recommend going there. In particular, Martin has put together a bunch of videos as well on this topic. So lots of information on how to make single-page application sites work well in Search. I think it's still one of those areas that is kind of bleeding edge. 
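As an illustrative sketch of the "clear, indexable URLs" point for single-page applications: the idea is that each view lives at a real path that can be crawled directly, rather than behind a #fragment. The route table and view names below are invented for the example.

```javascript
// Invented route table: each view is keyed off a real, crawlable path.
const routes = {
  '/': 'home',
  '/products': 'productList',
};

function resolveView(path) {
  // Unknown paths should render a not-found view, and the server should
  // answer them with a 404 status so they stay out of the index.
  return routes[path] || 'notFound';
}

// In the browser, navigation between views would use the History API so
// the address bar reflects the real path (browser-only, as a comment):
//   history.pushState({}, '', '/products');
```

The server also has to respond to those same paths directly, so that a crawler requesting /products gets the content rather than a blank shell.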
So if you're unsure about all of this, if you're not that technically versed with JavaScript and all of these things, then it might be something to try to get some help with from people who have a little bit of experience here, just to make sure that you're not running into any pitfalls when you're creating things. Once you're sure that your setup works well in Search, then it's a little bit easier to just say, well, it's working. My content is being indexed. I can focus on the content and not so much on the technical details. Which 30x code should we use to not lose any link equity? So we differentiate between two types of redirects, the permanent and the temporary ones. Permanent redirects really tell us that everything should be focused on the destination page, which means that we forward all of the signals to the destination page. The temporary one tells us we shouldn't trust this redirect to remain there forever. It's temporary, so we should focus on the source page instead, and then all of the link signals are focused on the source page. It's not that you're losing anything. It's just, do you want this page or that page to have those signals? That's the decision that we need to make there. So use the right redirect type for the type of issue that you have. Do link clicks from Google My Business pages count as organic links or direct clicks? This sounds a bit like an analytics question. I don't really know from the analytics side. Within Search, it's organic. OK, it's organic. So if we're showing that map, what is it, the local results one-box thing, with a link to your website there, then that would count as an impression. And if someone clicks on the link to your website, that would count as a click in Search Console. So that would be fine. Within the Maps UI, if you're in Maps and you search and someone clicks on your website, we would not count that in Search Console. 
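The permanent-versus-temporary distinction described above can be sketched with a tiny HTTP server. This is illustrative only: the paths are hypothetical, and the standard library's `http.server` stands in for whatever actually serves the site.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/old-page":
            # 301 (permanent): signals should be forwarded to the destination.
            self.send_response(301)
            self.send_header("Location", "/new-page")
            self.end_headers()
        elif self.path == "/moved-for-now":
            # 302 (temporary): the source page stays the one to focus on.
            self.send_response(302)
            self.send_header("Location", "/temporary-home")
            self.end_headers()
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the example quiet

# Port 0 lets the OS pick a free port; server.serve_forever() would
# then block and answer requests with the redirects above.
server = HTTPServer(("127.0.0.1", 0), RedirectHandler)
```

Which status code you pick is the whole signal: 301 says "consolidate on `/new-page`", 302 says "keep treating the source as the page that matters."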
What you could do, though, is add UTMs for Google Analytics to track it as Google My Business traffic. Yeah, you can add the UTM parameters there. And I suspect you can see that in Google Analytics; it's not guaranteed that you would see that in Search Console. The main reason is that in Search Console, we would try to show the canonical page, and the canonical page is probably the one without the parameters. So it's kind of hit and miss. In talking with the Google My Business team here, they're not so sure which direction they want to take there, which standpoint they think is the right one. So I'm not sure how that will remain in Search Console. In Analytics, I suspect you'll still be able to see that. And then I think the question is also with regards to Google Ads, would those count as clicks and impressions as well? Those would not count in Search Console. So in Analytics, probably; in Search Console, they wouldn't count because they're not a part of the organic search results. How important is mobile page speed as a ranking factor? We did a bunch of stuff and we improved our score, so should we be ranking higher, essentially? We don't have a measure for the importance of individual ranking factors. So that's the main thing here with regards to the meta question of, is it even worthwhile for us to work on speed if we can't quantify exactly what the ranking effect is? I think it makes sense to focus on speed regardless of any ranking factor. The main reason being that when people come to your site and your site is really slow, they're going to leave again. So if you care about the people that make it to your site, then make sure that they have a good experience on your site. And usually, speed plays a big role in that. And there are different ways of measuring speed. There are different metrics, a whole bunch of different three-letter acronyms, which I can't remember. Martin knows most of them. 
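Going back to the UTM tagging mentioned above for Google My Business links, here is a small sketch using only the standard library. The URL and parameter values are illustrative, not a recommendation for specific campaign names.

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_utm(url: str, source: str, medium: str, campaign: str) -> str:
    """Append utm_* parameters, preserving any existing query string."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    return urlunsplit(parts._replace(query=urlencode(query)))

# Hypothetical link you might put on a Google My Business listing:
tagged = add_utm("https://www.example.com/contact",
                 "google", "organic", "gmb-listing")
```

Note the caveat from the discussion: Analytics will see these parameters, but Search Console may fold the tagged URL into its canonical, untagged version.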
But there are so many different variations, especially with regards to the different page types. So if there's a page where you want people to enter something, then maybe you'd look at something different than if you want people just to see your content. So there are lots of different metrics out there. I would try to find the metrics that are important for you, for your site, for your pages. I would also use the different testing tools to find the low-hanging fruit that you can improve fairly quickly. Pretty much every site has some low-hanging fruit, which makes it easier for you to improve your score across the board. With regards to the individual testing tools, like Lighthouse or PageSpeed Insights, the score there is mostly a representation of, we looked at different factors and we came up with a score. It doesn't mean that this is a target where you should aim to be 100 out of 100 all the time, or that you'll have a clear ranking bonus if you have everything set up. When it comes to ranking, we look at a mix of metrics. On the one hand, these theoretical metrics that come out of the testing tools, but also field data from things like the Chrome User Experience Report, where we see what people are actually experiencing when they come to your website. So it's not that you need to game the PageSpeed Insights score and get 100 out of 100 there, but rather, essentially, if you wanted to game the score, you would do it by making a really fast website, which is what we're trying to encourage people to do. So summarizing all of that, there's no specific weight of individual ranking factors that we can give, so I can't tell you how important this is. It does make sense to focus on speed even regardless of the ranking side, and we do use it in ranking. So it is something that does have an effect on the ranking side, specifically around speed. 
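The lab-versus-field distinction above can be made concrete by looking at what a speed-testing API might return. The JSON below is a hand-made, heavily abridged stand-in loosely shaped like a PageSpeed Insights v5 response; treat the exact field names as an assumption for illustration, not as documentation.

```python
import json

# Hand-made, abridged sample: a Lighthouse-style lab score plus a
# Chrome-User-Experience-Report-style field metric. Real responses are
# much larger; the field names here are assumptions for illustration.
sample = json.loads("""
{
  "lighthouseResult": {
    "categories": {"performance": {"score": 0.92}}
  },
  "loadingExperience": {
    "metrics": {
      "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 2100}
    }
  }
}
""")

# Lab data: what the testing tool computed under controlled conditions.
lab_score = sample["lighthouseResult"]["categories"]["performance"]["score"]

# Field data: what real visitors actually experienced.
field_lcp_ms = (sample["loadingExperience"]["metrics"]
                ["LARGEST_CONTENTFUL_PAINT_MS"]["percentile"])

print(f"lab performance score: {lab_score}")
print(f"field LCP percentile (ms): {field_lcp_ms}")
```

The takeaway matches the answer above: the lab score is one synthesized number, while the field data reflects real users, and ranking looks at a mix rather than the single 0-100 score.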
How often do you check the overall quality of content on a website as a whole? Is it monthly, or more often than that? Or is it only when the core updates happen? We essentially have different algorithms that are running all the time. And they look at all kinds of signals that we can collect about your content, and they try to come up with a notion of quality and a notion of relevance for individual queries for the website on a running basis. So it's not that there is a one-time run where we go across all websites on the web, look at all of the content, and evaluate the quality. I don't think that would be feasible. On the one hand, we can't look at a whole website at one time. We kind of have to crawl pages individually and spread that load out over a longer period of time, purely from a technical point of view. And with regards to the core updates, these are essentially just different ways of us evaluating the quality of a website. So it's not that we would go and look at the website again then, but rather, we have collected all of these signals about this website, about how we think we should treat it. And then the core algorithm updates essentially say, well, we use slightly different numbers when calculating the total relevance for individual queries. So it's not that we reevaluate the website then, but rather that we recalculate the scores. We built an SEO-optimized site with an SSL certificate and a one-second loading time. But it ranks below sites that have no SSL, bad user experience, and a long load time. What tips do you offer to rank above them? So those are great things, really good technical things to implement. I think having HTTPS is a great thing to do, especially nowadays, when that's kind of expected. A one-second loading time is fantastic. So that's really awesome. I think, in general, you're kind of on the right path there. 
In practice, that doesn't mean that your site is suddenly more relevant than other sites out there. So I would still kind of take this as a good foundation to build on and then work to make sure that your whole website is really fantastic. And ideally, that it's really significantly better than the others that are out there so that it's clear to our algorithms that we should show your website above the others. And that's something that's not just based on technical details. So where you really need to make sure that the content is really significantly better. So if you were, for example, just to take the content from another website and host it on HTTPS with a really fast server, then that doesn't mean that content is suddenly better, because it's essentially the same as the other website already has. So what big difference does it make there? So really kind of take a step back. The technical details are fantastic. But you need to kind of deliver the actual content as well. Is the Google Sandbox real? What's the impact on a new website? So no, there is no Google Sandbox. I think we've said this tons of times. But these questions come up all the time. It's something where we don't have this kind of notion that a new website should be blocked from being shown in search until it has reached, I don't know, a maturity of half a year or a year or something like that. We don't have that kind of a notion built into our algorithms. However, there are certain things that could look like that. So in particular, if you're a completely new website, then we don't have a lot of signals for your website, for the content that you have there. So essentially what happens is our algorithms say, well, we don't really know that much about this website, so we're going to guess. And in the beginning, that guess might be positive. That's something that some folks externally have kind of framed as a honeymoon period where Google really likes new websites. 
It could also be that that initial guess is a little bit too low, where we say, well, we're not sure about this, so we'll put it kind of in the mid-range. And there's some really good stuff here. So that mid-range versus the really good stuff is going to be hard competition. So you might be ranking a little bit lower than you would in the long run. And this is the kind of thing where it's not that our algorithms are saying we should, by default, block this website from showing up, but more that we just don't know how we should be showing this website, ideally, in search. So we start off with some assumptions, and we kind of see where that settles down over time. So that's probably the effects that you would see there, where some sites, when they start off, they start off really well, because we think, well, this looks really good. And for other sites, we don't really know where to place them either. And maybe that starting point is a little bit lower than it would be when it settles down for the long run. And obviously, the settling down part is not static either. Ideally, you would create a website, and you would grow your web presence. You would promote it online. You would bring people to your website. People would say, oh, this is fantastic. I've been waiting forever for this. And they would kind of link to it from other sites as well and promote it for you a little bit too. So that's something that's kind of very dynamic as well. It's not that you put a website up once, and then you never touch it again, and nothing on the rest of the web changes for a really long time. The whole web is dynamic. So continuing working on a website to make it better over time is definitely what I would aim for there. OK, I think we're running low on time. So I thought I'd kind of open it up for you all to see if there's anything on your mind that we should be focusing on. Looks like a bunch of back and forth in a chat. Yes. Hi, John. Hi. 
I have one question, actually, about m-dot sites and www websites. So I know that Google recommends responsive. But this website was already running, and in this case, we were just going with the m-dot website. So in this case, John, if I want to go with A/B testing for www, with a redirection to the new URL and then a canonical to the old one, I have just gone through that. But how should I manage the m-dot website, especially when it is mobile-first indexed and I have to provide the equivalent version, or equivalent behavior, on both devices? Or suppose not just m-dot but AMP is also available. Yeah. So I think you're making it really hard. With this kind of a setup, if you have desktop, m-dot, and AMP, and you're doing A/B testing across different URLs, you're creating a whole bunch of URLs for essentially one piece of content. And that makes it so much harder for us to process properly. So in practice, you would work with the canonicals, including for the m-dot A/B testing versions. You would set the canonical from the m-dot B version to your main version, assuming that the A version is the main version. And the same thing on desktop: you would take the desktop B version and canonical to the desktop main version, so that we can understand that connection a little bit better. You have the same setup with the AMP pages, where you have the AMP page with the canonical set to the desktop version or to the mobile version. But I think with all of this extra complexity of different canonicals and different URLs with the same content, you have to assume it will cause issues, in the sense that our systems might be confused with regards to which URL they need to actually index. So if instead of just one URL with a responsive site, we suddenly have six or more URLs that all have the same content, and we have some canonicals and some cross-linking there, then it can easily happen that our systems pick a wrong URL for indexing. 
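The canonical setup John describes can be sketched as a mapping from every variant URL to the one URL that should be indexed. All URLs here are hypothetical, and the helper is purely illustrative:

```python
# Hypothetical URLs illustrating the setup described above: every A/B
# variant points its canonical at the main (A) version, and the AMP
# page points at its desktop/mobile counterpart as usual.
CANONICALS = {
    # desktop B variant -> desktop main version
    "https://www.example.com/page?variant=b": "https://www.example.com/page",
    # m-dot B variant -> m-dot main version
    "https://m.example.com/page?variant=b": "https://m.example.com/page",
    # AMP page -> its canonical counterpart
    "https://www.example.com/amp/page": "https://www.example.com/page",
}

def canonical_link_tag(url: str) -> str:
    """Return the rel=canonical tag a variant page should carry, if any."""
    target = CANONICALS.get(url)
    return f'<link rel="canonical" href="{target}">' if target else ""
```

Even laid out this cleanly, the warning above stands: six-plus URLs for one piece of content leaves room for the systems to pick a URL you didn't intend.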
In practice, what that means is we'll index the content on that URL. We'll pick that as the canonical and show that in the search results. If the A/B testing is minimal, then probably that's no big deal. If the A/B testing is significantly different content, then we would index that significantly different content, and that could be reflected in your ranking. With regards to different device types, we do recommend using redirects between the device versions. So if a desktop user goes to the mobile version, they would be redirected to the desktop version. So that's less of an issue. They would still get to the right version. However, if you're tracking things in Search Console, or if you're tracking things in Analytics and you're monitoring it by URL, then I would expect to see some amount of confusion across those different versions. That's primarily also the reason why we suggest moving away from an m-dot version to just a purely responsive version: the less confusion you give us, the more likely we'll do what you ask of us. So if we just have one URL to index for that piece of content, then we'll say, sure, we'll index it under that one URL, because that's all we know. Whereas if you give us lots of different URLs for the same content, with some connections via rel canonicals and maybe redirects in place, then we'll try to figure it out for you. But it's not certain that we'll figure it out in the way that you actually want, or in the way that is useful for you when you're tracking metrics. John, actually, this was already an old website, and we were not able to just move it to responsive. That is why the m-dot version was still going on. And John, that means that if responsive serves all devices, the A/B testing will be going on everywhere in the same way. With an m-dot website, how will Google behave if I only want to do A/B testing for desktop? I think with an m-dot version on mobile-first indexing, we would index only the mobile version. 
So we would not even see this A/B testing on desktop. So that's something that makes it a little bit easier, in that you can do the A/B testing purely from a user point of view on desktop. But we would not see that for indexing. All right, all right. Thanks. Someone else, I think, had a question. But I forgot who that was. And how many questions? Oh my gosh. OK, let me see if I can grab a quick one from YouTube. Question about verifying status codes. In the answer, you said if it's a 400 or 500 error or redirect, then obviously those are things that we wouldn't render. We're planning a website switch that will include some changes of URLs on which we're currently ranking. If we're redirecting those URLs to new ones, helping the new ones get discovered, will Google see the 301 status code and ignore them? So I think that might have been unclear, or obviously it was unclear if the question comes up, in the previous Hangout. Essentially, that was specific to rendering the page. So if there is JavaScript content on the page and that page itself is returning a redirect or an error code, then we would not render the JavaScript content. You could imagine that you have a user-friendly 404 page and it uses JavaScript to show some content, including some links to other pages. Then we would not use time to render that error page, because we already know it's a 404, so any of the content that we find here would not be indexable anyway. So we would not render the page in that case. When it comes to redirects, obviously we wouldn't render those pages either, but we would follow the redirect. We don't need to render the JavaScript; we just follow the redirect to the new location. So definitely, if you're changing URLs from old URLs to new URLs, then a 301 redirect is the right approach here. And it doesn't matter if we render the redirecting page or not, because even in the browser, that move happens fairly quickly, and you would immediately go to the next step in the redirect. 
You wouldn't even try to render the HTML that's returned with the redirecting page. I don't think most browsers would even show that. So from that point of view, a 301 redirect is definitely the right approach. John, I have a question about the "In this video" feature. I don't know if you know anything about that. But just to be clear, if you have YouTube videos and you're putting the timestamps in the description, you don't need to apply with that beta form or that schema. The schema is only if you have your own videos hosted on your site. I don't know. Good question. Is that unclear in the documentation? I think it was clear that you don't need to submit the form if you have YouTube videos. Somebody asked me, and they thought it was not. So I just wanted to clarify that with you. I'll tweet at you and see if you can look into that. I think the best approach would be just to prove it. I don't want to prove it, because I do both. And now it's like, all right, which ones is it pulling from? It doesn't even pull them from my YouTube videos anyway. I have like 50 timestamps. And I don't think we will make it. So the next question is, how many timestamps are too many timestamps? I don't know. And since the feature is in beta, when is it going to be released? And then if we do it, do we get our DA ranking boost? So let's put that out there. Yeah, I think that might happen. I don't know. You don't know. OK. It's possible, maybe, definitely. It depends. Got it. It depends. It depends. Yes. All right. Perfect ending. OK. So we're at time. Thank you all for joining in. Thanks for all of the questions and all of the discussions here as well. I hope you found this useful. And maybe we'll see each other again in one of the future Hangouts. Wishing you all a great week. Bye, everyone. Bye, John. Thank you.