All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland. And part of what we do is these Webmaster Office Hours Hangouts, where webmasters can come and ask any search-related, Google-related, webmaster-related questions that they might have. There are a bunch of questions that were already submitted. But as always, if any of you want to go ahead and grab the first question and take it, feel free to jump on in now. Hey, good morning, John. If it's OK, I'll ask a question for you. Fantastic. Been a while since I've been on a Hangout. But we've been investigating. We seem to have a pretty significant drop in traffic from Google that seems to coincide with what people, I guess, call the Fred update. So in trying to determine what's going on, we're obviously researching different queries and things. And we found a lot of instances where sites that stole our content will outrank us on long tail queries off of our pages, including some where we don't show up unless you hit view omitted results. And we think this could indicate some sort of a problem that we're trying to figure out. And we did see similar things like this a couple of years ago, when we had some issues that we still don't quite understand, that we recovered from. So we're trying to figure out what direction to look in to figure out what might cause that symptom. That's hard to say. I mean, that could be pretty much everything with regards to the Fred updates. That's essentially a name that was given externally and not something where we'd say this is one specific update. This is basically just the general updates that we do. Someone decided to give them a name. And from our point of view, they're just normal search quality updates that we always make. So it's not that we would significantly change our criteria or look at something completely different. 
It's more that these are kind of the normal quality type updates that we make to kind of improve the quality of the search results. So my general feeling is, if you're seeing changes in your site's visibility in search during this time, then that's something where probably some of our quality algorithms have been re-evaluating your site and thinking about where does this site fit in best. And they might be assuming that, from the quality point of view, it's not as good as they assumed it was before. So obviously, that's tricky to turn around and say, this is something specific that I need to work on or I could work on. But in general, that's kind of where I would look as a first thing. Obviously, first, make sure that there are no technical issues, that there's nothing technical kind of blocking things. Search Console should give you some information on that. We don't seem to see any technical things. And then as far as, I mean, the most concerning thing is having the sites outranking us that are doing it with exact copies of our own content, which we're worried could indicate some kind of issue that we don't really understand at all. So that seems kind of weird. That's something where I'd love to have some examples, if you have something like that. And I can take a look at that with the team here. I appreciate it. I'll pop you a post on Google+, if you don't mind. Yeah, I think it's probably worth looking at that in detail. But sometimes that can also happen if you're searching for a really long piece of text. Then we might say, well, we have this text here and we have this text here. And it's kind of a flip of a coin whether we show this one or the other one. But for more generic terms, where someone is really copying a lot of content from your website and then ranking for those generic terms instead of you, that seems like something we should probably look at. OK, thank you. Sure. Can I jump in with a couple of questions, John, please? 
All right. OK, so I was at a seminar the other day. And they said, if you had a canonical and a noindex on that page, the canonical passed the noindex to the canonicalized page. Is that the case? Did you understand that? That's, I don't know. It's something where we've kind of discussed this a long time ago internally, at least, with regards to what we should be doing in a case like that. Because with the canonical, you're saying these two pages should be treated the same. And with the noindex, you're saying this page here should be removed from search. So our algorithms theoretically could get confused and say, these should be the same, and this one should be removed — so we should remove both of them, right? Right. Or they could look at it differently and say, well, these should be the same, and this one should be indexed. Therefore, maybe, I don't know, what should we do with the noindex here? But in practice, what we've kind of come to is just saying, well, probably the noindex is a mistake here. Probably you're just using the noindex as a way to kind of pick a canonical or force a canonical. And from our point of view, we'll try to follow it like that. OK, that makes sense. And my other question was around Search Analytics in Webmaster Tools. So we like to split out brand and non-brand. And we'll do this with the filters. But you've probably had this question before — the numbers don't match up. So if we've filtered brand versus non-brand, and we take those numbers, those never seem to add up to the total impressions and clicks. Could you just tell me why that is, and is there a workaround? That's probably when you're looking at things where there are parts included that we filter out. So specifically, in Search Analytics, we filter out queries that see very few impressions, just to kind of make sure that there's kind of like a privacy threshold there. You can see the full numbers if you look at it on a per-page basis. 
So that might be an option. The other option might just be to say, OK, there is always going to be this difference, and we just have to live with that. I think it's the case that if we filter the reporting to not contain the brand, then obviously we'll assume that the other number is going to be the difference between that number and the total, and it doesn't necessarily add up either. Yeah. You sometimes also see that for smaller sites, where when you look at Search Analytics, the query view, it'll say on top, I don't know, 1,000 impressions. And in the table, if you added it up, there are 700 impressions. You're like, what's happening here? And that's really just from filtered queries, where you say, well, these are individual queries that happened that we don't want to pull out separately. OK. So you're saying put it on a page-by-page basis, and that would give us a more accurate number. But then would we not be able to filter on queries? Yeah. As soon as you add queries to that, then you'd have that filtering happening. So on a per-page basis, obviously, separating brand versus non-brand is not going to be trivial. For some sites, it might be that the home page is kind of like the brand page, and the detail pages are more kind of the non-brand pages. But that really depends a lot on the website. OK. Thank you. All right. A question here in the chat. Does Google award some PageRank initially when a new page is created? And does PageRank flow when you link out? So I guess that's a tricky question, because I guess the biggest issue here is, of course, PageRank is something that we do use, but we use it internally. It's not something that is exposed externally. It's not something that webmasters see, and for the most part, it's not worth obsessing over. 
I think what you're probably seeing here is that when we see a completely new page on a website that we already know about, we have to find a way to kind of understand where this page fits in. And for a website that we know is generally of high quality and has really good content and is really relevant for users, we might say, OK, this completely new page that we don't know anything about is probably similarly OK. So it might kind of get treated well in the search results. And for other sites, where we know the whole website has been really spammy and really low quality, if someone adds a completely new page there, we're just going to say, well, I don't know what to do with this. Maybe we'll kind of be a bit more cautious here with regards to how we rank it initially. So that's probably the effect that you're seeing there when it comes to completely new pages. Hello. All right. I would like to raise one question. Sure. Yeah. So when we build a new page or post a very new post, and then there is social sharing about the post, and after that the post goes viral, then it gets a lot of attention from social shares and comments. Does Google believe that? Hello. Hi, everybody. Can I, just a second? Yeah, go ahead. So does Google believe that it is a genuine thing and push it upwards in the ranking? Or is it not considered as a ranking factor in the Google search engine? We don't use these kinds of social media signals for ranking. So the simplest explanation for that is that all of these links from social media sites have a nofollow attached to them — pretty much all of them, as far as I know. So it's not something where we could pass any signals from that. I think as a webmaster, obviously, if you get a lot of traffic from social, that's traffic. That's useful for your website and for making your website better in the end. 
The very next question I was going to ask about was, if I'm getting a very good amount of traffic from the various social networks, and I also want to get Google traffic — I've already proved that the content is worth reading and worth sharing — is it going to help me in the rankings that I'm getting a very good amount of traffic from Facebook, StumbleUpon, and various social networks? No, we don't use that at all. And if I have Analytics installed on my website, and Analytics is sending signals to Google that people are going through the different posts? We don't use Analytics at all for search. OK, thank you very much. All right, let me run through some of the questions that were submitted. And then we'll open up for some more general questions from you all afterwards as we move along. Hi, John. When you refer to Google's quality algorithms, can you explain how many algorithms you're referring to? And does that include Penguin or Panda, since it's continually running? So in general, we don't talk about how many algorithms we have that are running. Publicly, we sometimes say we use over 200 factors when it comes to crawling, indexing, and ranking. But in general, the number of algorithms is kind of an arbitrary number. One algorithm could be used to display a letter in the search results page. It's not something where counting the number of algorithms that are used is really useful. So from that point of view, I don't have any numbers that I can give you with regards to how many algorithms are running on Google's side for search. We're an EMD that no longer ranks for our main keyword. We've lost over 45% of our search traffic over the last two years. What has happened? What can we do to solve this? So I didn't look into the specific site to see if there's something different happening there. 
But in general, when you're looking at site changes that happen over multiple years, that's something that's not related to one specific thing on a website that you're doing right or doing wrong. That's usually a matter of a website maybe being really relevant in the beginning, and over time, maybe the quality degrading or the content degrading and the general relevance of the website dropping over time. That's something that is, from my point of view, completely normal. It's not the case that you can take a website and say it was number one for the past five years, therefore it'll continue to be number one for the next 10 years. All of these things change. The web changes, the whole ecosystem changes, everything moves on. And what used to be ranking number one, if you don't make any changes at all and don't adapt with the whole ecosystem, then that doesn't necessarily remain ranking number one over time. With regards to EMDs and ranking for the domain keyword, that's also something where I'd say just because a keyword is in a domain name doesn't necessarily mean that the website will rank for that keyword. So that's something where I'd say having a nice domain name is definitely a good thing. It's something that people can remember the domain name, go there directly. It's always a nice thing to have, but it's not the case that you automatically get number one ranking for the keyword that you have in your domain. Let's see — if a website doesn't implement Google Analytics or Search Console, can that negatively influence Google rankings? We touched upon this briefly before, and the answer is no. As far as I know, we don't use Google Analytics at all for search results. 
So whether or not you implement that is up to you. I think some of our tools are really fantastic for getting more information for you as a webmaster about what people are doing, but there are also lots of other tools out there that give you good information about what your users are doing on your website. So from that point of view, I would choose the tools that work best for you. Another common question that comes together with that is: if I use AdSense on my website, is that automatically better than if I use some third-party ad network? Or if I buy AdWords, does that affect my rankings? And for both of those as well, the answer is no. We don't use whether or not you're using specific Google services as a way of saying this website is more relevant or not. We want to make sure that in search, we're as neutral as possible, and that we look at the website as it comes, as users see it, and use that for relevance, rather than which specific technology provider they're using today. What's the best way for companies who are under black hat negative SEO attacks to find recourse or an outlet to discuss it with Google? I think this is always a tricky situation, because it's hard to tell what exactly is happening with the website. But my general recommendation there would be to make sure you use a disavow file, and use the domain-level entries in the disavow file to make it easier to keep up. So especially if you're seeing a bunch of really random links going to your website, and you're really worried that Google might be using those negatively, then on a domain level, it's really easy to fill up the disavow file, submit that, and then you know Google is not going to take those links into account. So that's kind of the best approach I would suggest there. 
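For reference, a disavow file is a plain-text file uploaded through the disavow tool in Search Console, and the domain-level entries mentioned here use a `domain:` prefix. A minimal sketch (the domains and URL below are made up for illustration):

```text
# Comment lines start with "#" and are ignored.
# "domain:" entries disavow every link from the whole domain,
# which is easier to keep up than listing individual URLs.
domain:spammy-link-network.example
domain:random-directory.example

# Individual URLs can also be listed directly if needed:
http://another-site.example/paid-links-page.html
```

One file per site; uploading a new file replaces the previous one entirely, which is also why an accidentally disavowed good link can be restored by re-uploading a corrected file.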
If you're unsure about what you should be putting into the disavow file, or how you should be getting this information, or whether or not it's even the actual problem that is affecting your website, I'd definitely recommend posting in the Webmaster Help forums. We have Webmaster Help forums, and there are also a number of third-party forums out there where peers can give some advice as well, and kind of look at your website and say, maybe it's not a black hat negative SEO attack. Maybe it's just that the quality of your website isn't as good as it used to be. Or maybe there is something technical on your website that you totally missed. And you might be seeing a bunch of crazy links going to your pages, but perhaps the actual issue with your website is something completely different. So that's where I'd kind of try to chat about this with peers to get some more insight into what you could be doing differently. My partner is a pet portrait artist, and her website dropped out of search rankings for pet portraits. We don't know why. Could it be a hidden manual penalty? There is no message in Search Console. So from my point of view, there is no kind of hidden manual penalty on our side. If someone from the web spam team thinks that there is an issue with a website, then that would result in a normal manual action that you would see in Search Console. So that's something where you would get some information there, and you'd be able to respond to that. You'd be able to say, oh, well, this hidden text issue that was flagged by the web spam team — I will take that feedback and fix that on my website. And someone from the web spam team will take that and review it when you submit a reconsideration request. So it's not the case that there is any kind of hidden manual action that is possible behind the scenes. It's really essentially all shown in Search Console. With regards to completely dropping out of the search rankings, it's kind of hard to say. 
So looking at the site briefly, it's something that sometimes is more visible in search, sometimes a bit less visible in search. And in cases like that, where you don't have this kind of steady visibility in search, these kinds of fluctuations can happen over time. And sometimes it's something as simple as seasonal changes, where maybe you used to get 10 impressions a week, and then when everyone is on vacation, those 10 impressions disappear, because the handful of people that were searching are doing something different at the moment. And that's not necessarily something that Google would be affecting manually on our side. We've been getting links back to our category pages through brand anchors. Is that fine with regards to SEO, or will it harm us? Should we disavow them? If these are natural links to your website, I would just keep them. I think that's perfectly fine. Some websites have a lot of links to the home page. Some have links to individual product pages, some maybe to category pages, tag pages. It really depends a bit on the website and on their users. So totally up to you. I wouldn't disavow these or block these types of links if these are normal, natural links that your users are essentially placing for you. You mentioned nofollow links don't pass signals, since Google works on followed links. If some sites only give out nofollow links and don't have any kind of followed links, how does Google treat those sites? We don't pass any signals through nofollowed links. So if a site only has nofollowed links, then there are no signals that we can pass on. So it's not the case that we would say we'll kind of ignore the nofollow here because you're nofollowing everything, and it's also not the case that we would say this website is nofollowing everything, therefore we will demote it in search for being a jerk. Essentially, we try to just use the links as we find them — some are nofollow, and some are just normal links. What are URL errors? 
How can we fix them? These are probably crawl errors that you're looking at, probably in Search Console. So I'd double-check the Help Center article for crawl errors on our site. That should give you some insight into what might be happening there. PageRank passing from a redirecting page — is the one-year period because of crawling? If a page is crawled the next day and the redirect is found, should that redirect still be there for a year? So in general, I would say yes. If you're doing a permanent redirect from one URL to another, I'd recommend keeping that as permanent as possible. Otherwise, it's not really a permanent redirect, right? The recommendation that we have with regards to the one-year minimum is because, depending on the website, some URLs might get crawled frequently, some less frequently, and sometimes it can take up to half a year before we crawl a URL. And to make sure that we've at least seen that redirect a handful of times, I recommend keeping that redirect in place for at least a year, so that in the worst case, if we crawl something every half year, we'll have seen it at least twice. So that's kind of where that recommendation comes from. If you're talking about individual URLs rather than the whole website, then obviously you can kind of adjust there. But in general, if you're redirecting URLs, I still recommend keeping that in place as long as you can, because that's kind of what people would expect — if they visit the old URL, they get redirected to the new version of the URL rather than a 404 page or something completely different. How long can it take for the algorithm to dismiss rankings of sites that get a lot of spammy links? I've seen two examples of number one ranking sites that have maintained their position for months, even though they have very spammy links. This kind of question comes up every now and then, where someone is saying, well, this is a spammy site, and I reported it for spam, and how come it's still ranking? I don't like that. 
It's kind of tricky in the sense that we do take into account a lot of different factors when it comes to crawling, indexing, and ranking. And it might be that we're totally ignoring these spammy links on this website, but apart from those spammy links, there's some really good stuff on this website — some really good signals that we're getting, where we're saying, well, actually, this is a pretty good website. So it might still be ranking fairly well, but not because of the links. It's more that despite those links, it's still ranking. So that's something to kind of keep in mind: if a competitor is doing one thing that's really spammy, that doesn't necessarily mean they'll disappear completely from the search results because of it. They might be doing other things that are really well done. And my general recommendation there is to say, well, don't concentrate too much on what your competitors are doing, and instead try to kind of significantly move to the next level and really make sure, across the board, that your website is the one that is the most relevant for these types of queries. We removed a few links coming to us. We tried submitting those links too, but Google is not crawling them and picking up the new version. What can be done in a case like that? In a case like this, I would just let it be updated on its own automatically. There is no need to manually request a review of a handful of individual links. So I wouldn't necessarily worry about that. Let's see, where are we up to here? For websites geotargeting many countries, is there a real benefit in choosing unlisted in Search Console? So the unlisted setting essentially says you explicitly don't want to use geotargeting. 
So the main kind of reason you might want to do that — I think it's really rare that you'd want to do that — but the main reason you might want to do that is if you have a domain name that has previously been really focusing on one country, and suddenly you're saying, well, I want to really get rid of this individual country focus and really strongly focus on international. In a case like that, we will kind of remove the geotargeting bonus for that specific country and treat that more as an international website. So that's kind of the situation where you're making significant changes in your user targeting on your website. And for the most part, if you've always been an international company, then there's nothing you need to change here. Or if you're focusing on one country individually but you're also available internationally, that's perfectly fine. That's not something you need to change there. It's really more a situation of if you're making significant changes from a really strong country focus to a really global focus, where you're really saying, I totally don't want to be focused on this country anymore — then that might be an option. Let's see, will you crawl, index, and value content placed in toggleable JavaScript tabs? There's evidence that you ignore JavaScript tab content. This is an interesting question, because it always kind of combines a set of misconceptions and misunderstandings, I think, with regards to JavaScript and tabs. So in general, at the moment, the situation is that if there's content on your pages that's not visible immediately, then we'll try to kind of treat that with less weight. So if you have CSS tabs, for example, that you can toggle in between, and when we crawl your pages, we get the full content of the main tab and kind of the hidden tabs as well — then that's something where we'll see those hidden tabs. 
We'll see their content and say, well, this is not immediately visible, so maybe it's not the primary content, and not give it as much weight in search as the primary content might have. And a pretty common situation where you see this a little bit more visibly is when you look at the snippet that we show in the search results. Usually, we'll try to pull that from the visible part of the page, because that's what users would see when they go to those pages. So with regards to JavaScript tabs, I guess the first step is whether or not that content is actually known to us when we load that page. So if you're just using JavaScript to tab between individual items on a page — just changing the DOM nodes from display none to display block or whatever — then, from our point of view, that's essentially the same as something like CSS tabs, where the content is known and you're just kind of turning its visibility on and off. So that would be exactly like before. We would be able to index that. We'd generally just say, well, probably not as much weight as the visible primary content on the page. The third variation that we sometimes see is that you use JavaScript to actually load the content. So you click on the tab, and then JavaScript in the background goes off and fetches the content from the server and then displays that instead of the old page — kind of a way of doing tabbed content through your server. So you click on the tab, JavaScript pulls the content in and displays it for the user. From our point of view, that's tricky, because we don't see that content. So that's something where we would probably completely miss indexing that content, because Googlebot doesn't know where it should be clicking to see if anything changes on the page. So as long as the content is available when we render the page initially, then we should be able to use that for indexing. 
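The two patterns being contrasted here can be sketched in markup (a sketch only — the element IDs and the fetch URL are made up for illustration, not anything Googlebot-specific). In the first, all tab content is in the initial HTML and JavaScript merely toggles visibility, so it is present when the page is rendered; in the second, the content is only fetched when a user clicks, so it never appears in the initially rendered page.

```html
<!-- Pattern 1: content in the initial HTML, hidden with CSS.
     Present when the page is rendered, just with less weight. -->
<div id="tab-details">Product details…</div>
<div id="tab-reviews" style="display: none">Customer reviews…</div>
<script>
  // Toggling only flips visibility; the content was always in the DOM.
  function showReviews() {
    document.getElementById('tab-details').style.display = 'none';
    document.getElementById('tab-reviews').style.display = 'block';
  }
</script>

<!-- Pattern 2: content fetched from the server only on click.
     Nothing is there to index until someone actually clicks. -->
<script>
  function loadReviews() {
    fetch('/reviews-fragment.html')  // hypothetical endpoint
      .then(function (response) { return response.text(); })
      .then(function (html) {
        document.getElementById('tab-reviews').innerHTML = html;
      });
  }
</script>
```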
If the content is only loaded once you click on something, then it's very likely that we might miss that content, because we don't know what we should be doing on a page. We can't click on every visible element on a page just to see if there's some JavaScript that gets triggered that goes off and fetches some more content. Just quickly, related to that — I think the answer will be yes, but I just want to make sure. We have our advertiser disclosure hidden with a JavaScript toggle, display block, display none, CSS. And I just want to make sure that that's fine, that the algorithm sees that it's there, even though it's hidden. That's perfectly fine. These kinds of duplicated text blocks are really common, and they're something that our algorithms have to deal with. So we see this text block as duplicated — whether or not it's visible is even secondary. But we see it's duplicated across the whole website, therefore it's probably not something that's really relevant for this specific page. Maybe there's one page on this website that we can say this is relevant for. But it wouldn't cause any problems for the other pages. OK, thanks. John, can I step in with a question? I was wondering about pagination, the rel next/prev series, where — because we have so many pages, like 600 for one series only — I would love to put the noindex tag on all the secondary pages. But I will leave the follow in place, so Google knows that this is a series of products. But I don't want the secondary pages — because there are so many — to be indexed, only to get to the products themselves. Is this a drawback in any way? That's fine. A lot of sites do that. So that's totally up to you. Some sites let us crawl and index pages, I don't know, 100 down the line. And a lot of other sites say, well, the first couple of pages in the paginated series are the important ones, and everything else is kind of, if someone sees it, fine. But it's not the relevant content. 
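The setup being described — keeping the rel next/prev series markup, but marking deep pages noindex while still letting their links be followed — might look like this on a hypothetical page 5 of a category (the URLs are made up for illustration):

```html
<!-- /category/widgets?page=5 : a deep page in a paginated series -->
<head>
  <!-- Declare the page's place in the series for crawlers -->
  <link rel="prev" href="https://example.com/category/widgets?page=4">
  <link rel="next" href="https://example.com/category/widgets?page=6">
  <!-- Keep this page out of the index, but still follow its links
       so the products it lists can be discovered -->
  <meta name="robots" content="noindex, follow">
</head>
```

As noted just below in the transcript, noindexed pages still get crawled, so this controls what appears in the index rather than reducing crawling itself.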
The only reason I'm doing this is that Google has a limited time to crawl a certain website. And because this website is so big, we thought it would be smarter to use that time in crawling specific, important pages that are changing more frequently. Do you think it would be a good thing to remove the noindex? I mean, I don't know. It's tricky. I think that's fine. One thing to keep in mind is that with the noindex, we will still crawl those pages. So it's not necessarily helping the crawl budget. But what I would look at is — we did a blog post about crawl budget a couple of months ago. I would pull out that blog post and kind of go through that, and make sure that it's all in line with what you're doing there. I suspect you probably don't have an issue with crawl budget in most cases. OK. Thank you very much. It's very helpful. Hi, John. Hi. Last time I asked a question on the forum, but it wasn't touched. And my question is this. I work with my sister-in-law's website. She's in Calgary, in Canada. And she worked before with a company from Australia. And that's a totally spammy company with a lot of complaints, and they built a lot of bad and unnatural links. I disavowed them, and I tried to remove some of them. And one of them, which she had done in Calgary, they moved all the links. And I still see those links in different tools that I check with. Some of them ask me to pay $2 for everything that has them. Do you recommend paying or not, or what should we do? If you can't get the links removed and you don't want to be associated with them, I would just put them in the disavow file. And if they're in the disavow file, it's the same thing for us as if they were removed. So if they're saying, oh, this is such a hassle, you have to pay me for my work to remove this link — then you can just put that in the disavow file and move on. So that's probably the easiest way to handle that. 
OK, OK, I understand. So I don't need to pay, just leave them on. But why are they still in Search Console, in the links report? We show them in Search Console because they're still links. And we show nofollow links as well in Search Console. So even if you put them in a disavow file, or if you make them nofollow, we will still show them in Search Console. But that doesn't mean that these are ones that you need to focus on. All right. OK, thank you. Sure. Hi, John. Quickly, just on the disavow file, since you were talking about it — if you make a mistake and accidentally include a good link in the disavow file, and you put up a new disavow file that doesn't have that in there, will it go away from the disavow? Yes. The new disavow file always completely replaces the old one. OK, great. Thank you. John, happy to take questions? Ask. Go for it. Hello, John. Yes, so a couple of questions. You mentioned for the nofollow thing that you don't pass any signals. So what is the case when some websites, or let's say some publications, are actually mentioning you? So those citations are mentions of our brand name. Do they pass any sort of ranking signal that can actually help us? You could say those are like backlinks, where they're not actually linking, but they're actually mentioning our name. So will it help? No, we don't use that. Those aren't links, and we don't pass any signals there. So that's something that can be good for your business, because people see your name, and then they search for your name, and then they go to your website. But it's not something where we would say this is equivalent to a link. OK, fine, thank you. All right, let me run through some more of the questions that were submitted, and then we should have some more time for you all as well. I redid my structured markup a week ago, but nothing has been picked up in Search Console. How long does that take? Probably longer than a week. So this is something that depends on our crawling and indexing. 
And there are some pages we pick up fairly quickly for crawling and indexing, within a couple of days. A lot of pages on a website are picked up over time, so that can be weeks or months, even, until we re-crawl and re-index them. And then past that, there's always a bit of a lag before Search Console starts reporting on that, usually just a couple of days. So a week is probably not enough. Within a couple of months, you should be seeing this change in the graphs in Search Console. What you can also do to speed that up a bit is to submit a sitemap file with a new last modification date for those individual URLs. The sitemap file tells us that you've made changes on these URLs at this specific date. And we'll double check our systems and say, oh, this date is newer than when we last crawled this page, so we'll go off and crawl this URL a little bit faster than we would if we just naturally waited for things to update. So that might be an option there to speed things up a little bit.

I've added an API feed to show third party Google-approved reviews on my website. However, these reviews are collected by a third party. So am I correct in thinking that there's no SEO benefit here? Should I have my own review facility so I get the content benefit of showing reviews generated by us, as opposed to showing someone else's? I think there are two aspects here. On the one hand, there's no automatic SEO bonus for having reviews. Having reviews that can be marked up with the star review ratings in the search results is something that I think is a nice thing to have. So I'd double check our guidelines there with regards to which kinds of reviews you can show on your pages and where they should be visible. A really common mistake, for example, is showing the reviews across the whole website when they're actually attached to one specific product or service that you offer.
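As an illustration of that review markup, a product or service landing page might carry a JSON-LD snippet (inside a `script type="application/ld+json"` block) along these lines. All names and values here are invented, and the currently required properties should be checked against Google's structured data guidelines:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Garden Service",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "27"
  },
  "review": [{
    "@type": "Review",
    "author": {"@type": "Person", "name": "A. Customer"},
    "reviewRating": {"@type": "Rating", "ratingValue": "5"},
    "reviewBody": "A gardener that also watches out for my pets."
  }]
}
```

The key point from the answer above is placement: markup like this belongs only on the landing page for that specific product or service, not replicated across the whole site.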
So if you have one product or service that you offer, then only that landing page should essentially have those reviews on there. It shouldn't be that your whole website has the reviews for your business, for example. So that's one thing. The other thing is, of course, the content that's within the reviews. If you want to rank for the text that people use in those reviews, then obviously having those reviews on your website is a good idea. It kind of depends a bit on what kind of reviews you're getting and what kind of content you have on your pages. Sometimes reviews can be a good way to describe the product or service that you offer in a more natural way. So if someone writes about what your business is doing, or what you did when you delivered this service, then that can provide a bit of content that we can use for ranking as well. If someone else is looking for, I don't know, a gardener that watches out for my pets, or some weird combination that you wouldn't necessarily include on your product page but which is mentioned in a review, then that's something that we could pick up from those reviews. So those are the main things I would watch out for there.

I accidentally got my staging site indexed. I think you're not the only one; this probably happens to everyone at least once. When I removed it, I got crawl errors in Search Console. Should I just set them to be fixed? They come back again and again. So if you've removed the staging site, or if you've blocked access to it, then seeing crawl errors is completely normal. That's not something you need to worry about, and it's not something you need to mark as fixed in Search Console either. The mark-as-fixed feature in Search Console is only for the UI, and only for you, so that you don't see those errors. And the next time we try those URLs again and see that they're still errors, we'll flag them again in Search Console.
So marking as fixed is definitely not necessary here. If you know it's your staging site and you know Google was crawling it before, then Google is going to continue crawling it for a really long time. We have a big memory for these kinds of things, where we think, oh, there was content here before, we'll try it again next week, and the week after, and we remember this for years and years even. So I totally wouldn't worry about that. Those kinds of crawl errors are just what happens afterwards. They're not a sign of something going wrong or something that you need to fix. They're mostly a sign that you've locked it down properly now.

Here's a longer question about Google rewriting queries in search. I searched for my company name, and Google thinks I made a spelling mistake and searches for something slightly different. I don't want that to happen, and it's a really bad thing; I'm really upset about this. In general, we make these kinds of automatic replacements when we think that users were actually searching for a different word than what they submitted. I sometimes see this in the help forums, specifically with regards to companies that have names that are very similar to typos, and quite often companies that are fairly new, where Google's systems haven't yet recognized that if someone is searching for this specific word, then they really want to go to that website; they don't want to go to that rewritten version of the query. This is something that Google's systems learn automatically over time. There's nothing manual on our side, or anything on your side, that can force Google to not correct that apparent typo. So on the one hand, before going out and making a website that is essentially a typo of a very common word, I'd generally recommend thinking about what you can do to build a brand that's not a typo of a common word.
Because you'll always run into this weird issue where, if your website is googlewith1o.com, for example, people will have trouble remembering that. At least from a marketing point of view, that probably doesn't make much sense compared to having your own brand be known for what you are, rather than for what some other word is. And on the other hand, this is something that Google's systems will of course learn over time: as more and more people search for this specific typo-style name, they'll learn that those people actually mean that website. So those are the two options that I have there.

Does Google remember pages with text that were marked as spam before? Let's see: we're going to launch an e-commerce site, and recently thought about publishing a few pages about online stores in general, so that the site could start ranking before we put our real content on there. Google marked this as spam, and we're worried, I guess, that the new website will be seen as spam as well. So one thing there is that if something is flagged with a manual action from the web spam team and you resolve that, then that's resolved. There is no algorithm that looks into the past and says, oh, you once did this one spammy thing, therefore we'll keep an eye on you forever. If you've resolved that spam issue and it's no longer the case on your website, then you're OK. We look at your website the way that it is now, the way that the current signals are, the way the content is now. So from that point of view, if you've resolved the spammy issue and you want to launch on this website, then I would go ahead and do that.

I'm learning that the uncommon download error is increasingly common, and most folks aren't getting detailed information or samples, so there's no way to diagnose, troubleshoot, or solve it.
Yeah, so this is something that I've talked with the Safe Browsing team about as well, to see what we can do to make that a little bit clearer. The uncommon downloads error, in general, happens when you offer a download that might be an executable, or a compressed file where we don't really know what's inside, and that file is not downloaded a lot on the web. Then we probably haven't scanned that file, or nobody has scanned that file, because it's so unique. So it's unclear whether this is really a safe file or potentially a problematic one. Sometimes this comes up with downloads that are tagged with a unique serial number in the executable, because then that download will, of course, be unique to that user. But at the same time, there's no way for any system to have checked that download and said, well, this is actually a safe download. That's when we flag this uncommon download error. And we show that in Search Console so that you're aware of it happening. In particular, if you're tagging downloads, then that might be something to rethink. Maybe there is a different way you can handle this; maybe you can work with a product key that the user enters, rather than tagging the download itself, something like that. On the other hand, it's a sign that we haven't scanned these files; we haven't been able to check them. So what you can do in Search Console is request a review. In a case like that, our systems will go off and double check these files and run them through our malware and security scanners. If everything is OK, then we'll flag them and say, well, we've checked these files; they're OK. On the other hand, if anything is problematic with those downloads, we'll give you a bit more information in Search Console about the specific issues that we found and make it a bit clearer what we've seen there.
Just seeing the uncommon downloads error itself isn't a sign that something is completely wrong or that you need to do something differently. Oftentimes, it'll just settle down on its own. If you want to make sure that things are OK, then click that review button in Search Console so that our systems can double check the content that you actually have there. All right. Let's see, a handful of other questions. Let me just double check them real quick, and then maybe we still have some time for questions from you all.

Infinite scroll: is that blog post still relevant? Yes, that's still relevant. With regards to noindexing or nofollowing the paginated pages, like 2, 3, 4, et cetera: as I mentioned before, that's totally up to you.

Then there's a question pointing out a Webmaster Help Forum thread; let me see what that is, just to double check. So, one thing that I noticed: yeah, I got a ton of new property emails. What's up with that? I noticed this as well. I think it started happening yesterday or the day before, and I looked into it with the Search Console team. From our point of view, it's nothing in the sense that you're doing something wrong on your website. It seems more like something on our side, that we're mixing something up there. And I believe it's related to the Search Console beta that they're working on, so maybe there's some kind of interaction there that they need to double check. But it's not a sign that you need to make any changes on your website, or that you got hacked, or anything like that. It is confusing, though, and I'm sorry to fill your inboxes with these kinds of messages.

How important is schema markup? Platforms like Weebly don't support it that much. Are there any alternatives? Schema markup, and structured data in general, is useful for us to better understand the page and to have something that we can show a little bit richer in the search results.
So things like review stars, or images, or if you have markup for recipes on your pages, we can pull that out and say, well, this is a recipe page, and there are these ingredients, and this is the preparation time. And we can show that in the search results to make it easier for users to understand whether or not your page is actually the one that they want to go to. So how important it is really depends on what you want to do. If you want to see your search results marked up in this fancier way, using these rich results, then you probably do need to use some kind of structured data markup. Whether that's schema.org or another kind of markup that you use on your pages is more up to you.

Do relevant PBN links still boost the ranking of a website? So private blog network links are things that, for the most part, our algorithms are really good at spotting. And for another part, the web spam team loves to dig into these and take them out as well. So if you're seeing things that look like private blog networks and you're like, eh, I don't know what to do here, my competitor is doing this crazy thing and I don't know what to make of it, then you're always welcome to pass that on to us, and we can take a look at it as well.

And I think the other one is with regards to international traffic: should you use /en/ and /de/ subdirectories or separate domains, and they're assuming a traffic drop. There are different options with regards to geo-targeting, so I would primarily take a look at the big help center article that we have about this. And if you have questions about something specific, I'd go to the Webmaster Help Forum and get their advice, because there are lots of really smart people there when it comes to internationalization who can help you figure out what you could be doing differently. All right, back to questions from you all. What else is on your mind? I have a quick one, please, John.
If you have an old blog piece, like a past event, is it better to 301 redirect it, to pass the link equity to, say, a new blog piece for the new or similar event, or is it better to keep that content page live? Because it'll be an old page, it might have links pointing at it that we could probably reallocate elsewhere, if that makes sense. I mean, if you have a new page that replaces the old one, then redirecting is fine. On the other hand, if these are totally separate events, then redirecting could be a bit confusing, because it's not that the old event is replaced with the new one if it's a completely different event. But if you have a series of ongoing events and you're saying, well, we did it in this city last time and we're doing it in that city next time, and we're moving around, and the old events aren't relevant anymore and are being replaced by new ones, then maybe that's something where a redirect would make sense. I wouldn't blindly do this across a website and say all old blog posts should redirect to new blog posts. I think that's a lot of effort put in, and the gain that you get out of it is probably minimal. OK, thank you. All right. What more can I help with?

[Inaudible question.] It's a little bit difficult to respond to that. I can hardly hear you; there's a lot of background noise. Maybe you can try again a bit later.

Does content on category pages help in rankings, and is it considered main content? We do take content on category pages and treat that as part of the page. So if, for example, you have a category page for a specific type of product and you have an introductory paragraph on top, then that's content on the page that can be useful for us.
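For the event-page case discussed above, a targeted 301 for one replaced event (rather than a blanket old-to-new rule) could look like this in an Apache .htaccess file. The paths are hypothetical:

```apache
# The 2016 event page was fully replaced by this year's edition,
# so pass its link equity along with a permanent redirect.
Redirect 301 /events/2016-zurich /events/2017-zurich
```

This matches the advice above: redirect only where the new page genuinely replaces the old one, and leave unrelated past events live.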
Some sites I've seen really overdo it and take a category page and essentially put a Wikipedia article in five-point font at the bottom. From our point of view, that drifts off into just plain keyword stuffing. But if you have normal content on a page and it happens to be a category page, then that's totally fine for us. That's just content on a page.

John, if I can, one more question. Basically, what I like to do when there is a simple web page, a business page, is code it by hand, so it can be as simple, light, and fast as possible. But when it comes to blogging, I install WordPress. For the static website, I create a static XML sitemap. For WordPress, how do I integrate this? How should I handle the sitemaps? So for WordPress, or for other blogs, I believe there's a sitemap plugin that you can just activate in WordPress to generate the sitemap file directly. You can also submit the RSS feed as a sitemap file to us. The feed has the same kind of information as a sitemap file: the URLs and, in particular, the last modification dates. And that's just as useful for us. So those might be the options that would work. Yeah, but does that mean that I should have two separate sitemaps, one static and one dynamic? I would try to set it up so that each URL is only listed in one sitemap file. So if you have a static sitemap file that you generated for some of the static pages on the website, and you have a blog on the same website that has a dynamic sitemap file, then those URLs won't overlap, and you have two separate sitemap files that you can submit separately, which is perfectly fine. On the other hand, if you have a static sitemap file that you generate for some of these URLs, and the blog has a dynamic version of them as well, then I would only use the dynamic version, to make sure that what you submit in the sitemap files doesn't have conflicting information.
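Tying together the sitemap points from this answer and the earlier one (the last-modification-date hint for faster recrawling, and keeping each URL in only one sitemap file), a small script along these lines could generate the static sitemap. This is just a sketch, with hypothetical URLs:

```python
import xml.etree.ElementTree as ET

def build_sitemap(pages):
    """Build a sitemap XML string from (url, last_modified) pairs.

    The <lastmod> date tells crawlers when a page last changed,
    which can prompt a somewhat faster recrawl of updated URLs.
    """
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url, lastmod in pages:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
        ET.SubElement(entry, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

# Static pages only -- the blog's URLs live in its own dynamic
# sitemap, so no URL is ever listed in both files.
sitemap_xml = build_sitemap([
    ("https://www.example.com/", "2017-06-01"),
    ("https://www.example.com/about", "2017-05-20"),
])
```

Both files (this static one and the plugin-generated dynamic one) can then be submitted to Search Console separately, as described above.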
All right. Can I include static links in the dynamic sitemap on a dynamic website? I think that depends on the sitemap generator. I'm not completely sure about the WordPress ones; I know there is a long-established WordPress sitemap generator, and perhaps it does have an option for that. All right. Thank you. Sure.

All right, so let's take a break here. Thank you all for submitting so many questions, and so many live questions as well, and for seeing some new faces here. Fantastic. I'll set up the new series of Hangouts probably later today, so if there's anything on your mind that we missed, feel free to submit it there. Or, in the meantime, of course, feel free to contact us on Twitter, post in the Google Webmaster Help Forums, or contact us through one of the other means as well. All right. With that, I wish you all a great weekend and hope to see you all again in the future. Thanks for that.