All right, welcome, everyone, to today's Google Webmaster Central office-hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do is talk with webmasters and publishers like the ones here in the Hangout today. Lots of people managed to join, which is great. A bunch of questions were also submitted. If any of you are newer to these Hangouts, feel free to jump on in and ask the first question, if you have one. All right, I guess, if nobody else wants to jump in...

OK, what I wanted to ask: are links from relevant sites indeed more important than links from non-relevant sites? And what about links from generalized sites? So for example, if I have a site about the medical industry, is it expected that a big percentage of my backlinks will be from other sites in the medical industry, or is there no such thing?

We try to understand the context of a website, in part, from the links. So if all of your links are from used car salesmen and you have a medical website, then for us it's hard to understand how this connection and this mix works together.

So what if I have a medical website and the vast majority of my links are from generalized sites, like news sites, blogs, and things like this?

That's perfectly fine. That's usually not a problem. So it's not that you're expected to have a certain percentage of links from your industry. I wouldn't say that. I mean, it definitely helps us to understand the context of your website if we see where these links are coming from and they kind of match. But sometimes you get a lot of general links, and that's the way it is.

What about the language? Like, if I have an English-based site and I promote it aggressively in Romania, for example, where I am from, and I get a lot of backlinks, natural backlinks of course, from Romanian websites in the Romanian language, would that be OK? Would it maybe get full credit, or would you see it as, well, this is something strange: the site is in English, but 90% of the links are from Romanian sites?

For the most part, we would be able to figure that out. Things like the anchor text make it a little bit tricky.

No, no, no anchor text. Nothing artificial, nothing repetitive.

Yeah, no, I'm just saying, if all of the anchor texts are in Romanian, then we might think maybe your pages are actually in Romanian as well, or at least partially in Romanian as well. So that's something that might happen.

So the anchor text would be the brand name, for example, because the brand name is not in Romanian.

It should be natural. You should let people link to your site however they want to.

Obviously, obviously, but I was just thinking.

Yeah, yeah. I mean, that's not something you can really control.

Can I ask a question?

Sure. Go for it.

It's a very loaded question. So, this Holocaust thing. Danny Sullivan reports that Google pretty much removed the denial websites. The truth is, and I think you would agree with this, maybe not, that when you do a search for it, it does seem like all of these news-oriented sites pushed them down a lot, but they're still there. Google also made a statement to Danny saying that they actually changed the algorithm to promote more authoritative websites to help. I didn't see that per se.
But what I think, and maybe you could tell me if there are any PR statements being floated around that you could share with us, is that obviously maybe two or three weeks from now, once the news around this query has died down, the news results will go away. Usually, that's what happens over time. And the other sites will probably bounce back up a little bit. You see this a lot. Could you just comment a little bit about what's going on and why we see these shuffles?

I don't know what specifically is happening with those sites. So I haven't been involved with those discussions there.

Great. OK, thank you.

I mean, we do make changes in our algorithms based on feedback. We make changes in our algorithms based on re-evaluations. So these kinds of changes could be completely normal. But I really don't know what specifically is happening with regards to those specific queries.

OK, thank you.

All right, let me run through some of the questions that were submitted. And as always, if you have questions or comments in between, feel free to jump on in. And hopefully, we'll have more time for your questions towards the end as well.

"Can I rank my website with just user experience, without links, for 2017?" I don't know. It seems like you'd actually need to have some good content too, not just user experience, in order to have something for our algorithms to actually pick up. So just because your website is easy to use doesn't necessarily mean it's automatically relevant and helpful for people searching for that type of information. So that's something where I wouldn't focus on just one aspect of our algorithms, but really look at things overall. And if you build a website naturally, then that will include a whole mix of things automatically.

"When you check whether a website has significant unique content in comparison to other websites, do you look mainly at the amount of unique text that can't be found somewhere else, or do you use other, much stronger signals?" It's hard to say specifically what you're looking for there with that question. When I look manually to see if a website has unique content, I try to see how much of the text on these pages is unique and not just rewritten or spun or kind of automatically translated back and forth, those kinds of things. So that's something that I look at. With regards to our algorithms in general, they look at a variety of things. So they don't just look at how much of the text on these pages is actually unique, or how many pages have unique content; we try to look at the bigger picture of things.

"Do you check if a specific page on a website was useful for users regarding queries that the page ranked for? If this was the only search result the user clicked on for a specific query, then that page must be really useful for that query. If not for specific pages, do you check the amount of single clicks on a site-wide level to measure the usefulness of a website?" We try not to use clicks like that when it comes to search and ranking, because it's a really noisy signal. We do use that sometimes to evaluate our algorithms overall, to see: is this an algorithm that's generally headed in the right direction, or is this algorithm causing more problems for users than it's actually helping? So looking at it at a bigger-picture level, we would take a look at where people are clicking, but for individual pages, less so.

There's a really long question with regards to a site that seems to have gone down in ranking for some of the queries.
I looked at the site in question. I think it's linked towards the end, but there wasn't really anything specific that I'd be able to point out there. So this seems like something where traffic is just changing over time. It's not that there's a manual action; there's nothing from the web spam side where anyone is taking action on this website, but really essentially just the normal changes in ranking as they could always happen. It could also be a result of changes in user behavior. So if people aren't searching for your website, then obviously we won't be able to show that in the search results, and people won't be clicking through to it, because nobody is actually searching for that. So that's something where this might just be a normal change of traffic patterns for your niche. It might just be a normal change in the way our algorithms look at your site overall. What I would recommend doing there is maybe posting in the Webmaster Help Forum to get some insights from other people, other webmasters as well. Maybe there are some small things that you could do to help improve this overall.

Let's see. "Would you say the strongest algorithms that can promote or demote sites nowadays look at the overall quality of a website and the quantity and quality of incoming links?" It's really hard to say what the strongest algorithms would be, because it really depends on the query and on what is actually happening there at the moment. So it's not the case that we have this mix of algorithms and they each have their own weight, and it's the same across the board, and it's always the same weight. Rather, that can change over time. For example, if a presidential election takes place, then obviously all election-related queries will probably try to pick up newer content. Whereas if it's two years after a presidential election, then probably we'll try to pick up content that might be a bit older. So these are things that change over time, that change dynamically depending on the type of content that's out there. There's no single factor that's the strongest factor at all times.

"If there's a page ranking number one, Google sends a lot of traffic to that page. But users can't find what they're looking for on this page and search more. Do you flag and demote this page that's ranking but where not enough users find what they're looking for, or do you flag the algorithm, as in: this algorithm doesn't deliver and you have to change it?" So this kind of goes back to the previous question. Usually things like that are not something that we take a look at on a per-page basis. We would use that when evaluating algorithms overall. So it's not so much that we would say, oh, suddenly this algorithm is really bad. But when we're evaluating the different algorithms that we could potentially roll out, we'll look into this when we do our normal testing. So kind of like a normal website would do A/B testing, we have tons of tests that are running all the time, where pretty much every time you search, you're in one of these tests. And we try to see how users are responding to the changes that we've made, which might be small UI changes, like a few pixels here or there, a slight change of color, or maybe a box around a search result. They might also be ranking changes. So that's something where we take all of these tests and try to figure out, based on these A/B tests, which ones are working well and which ones aren't working so well.
We try to improve the ones that we see aren't working so well. And when we're happy with the test results, then that's something we'd probably end up rolling out.

"If a site is hit by Panda, can some pages that are really relevant still rank for popular keywords based on other signals?" Yes. This is not specific to Panda; it applies essentially to every algorithm. Since we take into account so many different factors, if one site is really good on a number of factors and really bad on this other factor, then that could still result in it being shown for some queries, or some of the time. So that's something where, from our point of view, by taking so many different factors together, we think we can get a better, bigger-picture view of that website and show it in the right places in the right search results.

"I want to know about the impact of a new domain's age on search ranking. How does Google consider the age of a domain?" For the most part, as far as I know, we don't have any age factor for a domain. So it's not the case that we would say, oh, this domain name has been registered for at least five years, or it's registered five years into the future already, and we should take that into account. We try to focus on the current situation. Where does this website stand now? How is it doing at the moment in search? How is the content at the moment? How relevant is this website at the moment? And we take that into account. However, that also includes a bit of the history as well, of course. Because if you're looking at the current situation of the website now, some of that will be based on what this website has been doing over the years and what it has collected over the years as well, which could be a lot of really good things, if it's been working on a lot of great stuff over the years. It might also be a mix, where you'd say, well, we've had a couple of bad years in between, and there's a lot of really crufty old content online as well, but the overall picture is pretty good. So we try to take into account the full view of what the website is currently doing and use that for search.

"Can you talk a bit more about intrusive interstitials from an SEO perspective? What are the differences between intrusive interstitials and YouTube ads?" That's an interesting question. I don't know how Search would see YouTube ads in particular. That's something I'll probably bring up with the team when they're back after the break. But in general, from our point of view, if you're promising something in the search results that you're not delivering when people come to your site, because you're showing a big interstitial, especially on mobile where you have to find the X to actually close that interstitial and move on to actually see the actual content, then that's something where this algorithm would take effect.

Sorry. Hi, my microphone wasn't working. I was just going to add something to a previous question. So regarding the age, though, it's in a patent somewhere, right? You have it in a patent.

No idea. Maybe, right? I mean, we do have a lot of patents. And just because something is patented or mentioned in a patent somewhere doesn't mean we're actually doing it exactly like that. So that's one thing to kind of keep in mind. I think it's fascinating to look at the patents that are filed, but I wouldn't assume that everything that's patented is implemented exactly in that way.

OK, it's just that there are hundreds of tools out there and they all look at the age and so on.
Yeah, I mean, a website naturally collects some stuff over the years. So even if you look at something as simple as links, if a website has been around for 10 years, it will have collected a little bit more than a website that just comes online this week. So these are things that kind of collect over time. But looking at just the current status doesn't mean that... I don't know, how can I put this? We try to see what the current situation is, rather than saying, oh, well, this is a 10-year-old domain, therefore it gets a bonus just because it's been registered for 10 years. If the current situation says, well, it's been around for 10 years, but actually nine of those years were really bad and it was run by a bunch of spammers, then that's not something that you'd want to count positively for a website.

Right, OK, thanks. OK, but link age does matter?

I wouldn't necessarily say that, that older links are more important than newer links. But if a website has been around for a long time, then it will have collected some links over that time.

OK, then I would want you to explain something to me. Remember, I asked you about the site where I had to remove a subdomain. There were a lot of backlinks pointing to it for years while there was nothing on the subdomain. The guy wanted to reinstate the subdomain, put back the pages that were backlinked, and you said that the old links would probably not count anymore. If Google only looks at the present state, why would they not count? Google will crawl the page again, will find the backlinks, will find the page existing, will find content, so it should give credit, if what you say is true. But you said they wouldn't matter, so where's the trick? What's the trick?

If everyone knew what the trick was, that would be hard. So what also comes into play here is that sometimes we recognize that a new site is actually totally unrelated to an old site. That's, for example, sometimes the case if you go out and buy an old domain name. It might have been, I don't know, a church website for 10 years. But if we recognize that your current website is really not the same thing as it was before, then we need to understand that difference and say, well, these old links apply to the old website, but they don't apply to the new one.

And does that only apply to subdomains, or also to full domains? Like, the main domain was always there. It's not like it would have been possible to change owners.

It's hard to say, because sometimes we treat subdomains and subdirectories like separate websites as well. For example, on Blogger these are separate subdomains, but essentially they are separate websites. It's not the same as one website that just has the same content on different subdomains.

So the case where the age counts is if you think that the site has changed hands. If you don't think it changed hands, if it has continuity, then age doesn't matter?

Yes, I guess.

OK, thank you. You think? Is that not good enough?

No, I think yes. The way that you mean it, yes.

Hey, John, can I jump in with something?

Sure.

I have a question about the mobile-first index. Actually, a couple of questions, because this is kind of a big change, and a lot of SEOs that I've been talking to are a little confused about it. So, do you mind if I ask a couple of questions about it?

Sure.

So the first question: will desktop sites be ranked on mobile site quality?

Yes.

Yes, OK.
So the desktop sites will be ranked on the mobile site quality. So if the mobile site has poor quality, desktop rankings could suffer?

Yes. So basically, we'll pick the mobile version of a URL as the canonical version, and we'll kind of extract all of the signals that we need from that mobile version.

OK, OK. So that makes sense, because that was kind of the way it was working with desktop sites: if a desktop site had quality issues, the mobile rankings would also suffer. So now it's just reversed.

Exactly.

OK, fantastic. Will mobile site speed affect desktop ranking?

I think at the moment, we don't take speed into account for mobile. So that's something we're looking into, to figure out how we can best do that. But at the moment, I don't think we take that into account at all.

OK, fantastic. Here's another one. You don't take speed into account on mobile rankings, but you do on desktop rankings currently?

Yeah, I was going to ask that.

No, we don't take the speed from the mobile sites into account.

But that's going to flip. So you will with the mobile-first index, probably?

I don't think we'd be able to do that from the beginning. That's a good point, though. Maybe we should clarify that somewhere. There's one aspect that does sometimes play a role there, in that if you look at things like PageSpeed Insights, we'll put together a mobile score for a page. And if a website is technically mobile-friendly, in that the UI kind of works on mobile, but it has really bad PageSpeed Insights scores, then that's something where we might also think that this probably isn't a good mobile page. But for the most part, in the same way that we use speed for desktop, I don't see us doing that in the beginning. That's something we'd like to do in the long run, but not in the beginning.

So you might reverse this whole thing, right? Because this is really an experiment, from what I understand. It's an experiment on your end, and you might even just reverse it and bring back desktop and mobile like it was, right?

We haven't even done it yet, so...

No, no, I know, but I'm saying once it's launched.

Yeah, I mean, I think usually the plan is that we test this extensively, so that when we do launch it, we're really happy with the results. It shouldn't be the case that we do a bunch of tests, and they all come back negative, and we say, oh, man, we'll just launch it anyway and see what happens. That wouldn't do anyone any favors.

But you guys are looking at releasing this pretty quickly? Like, I know you guys never pre-announce algorithms, but should we expect it in the first quarter, or are you at least looking to do it as fast as possible?

We're doing a lot of tests at the moment. So I don't see that happening that quickly. There are some things where site owners might also have to make some changes. And if that's the case, then we want to make sure that we give them enough lead time to actually make those changes.

Here's another interesting one that went one way, and I think you recently said it switched. So previously, as I understand it, Google would simply ignore any CSS-hidden content, any content that was hidden under a tab or whatnot, like display:none or hidden content or whatever. I think you recently said in one of these Hangouts that now, at least on the mobile site, you're not ignoring the hidden content. Now you're reading all the content. Is that correct?

That's the plan for the mobile-first index.

I see. So you're going to parse the entire page.
The user agent is going to be mobile. You're going to parse the entire page. And so I presume that you're going to rank both the mobile site and the desktop site on all the content there, whether it's hidden or not.

Yes.

OK.

That's the plan.

Well, that's interesting, because that changes things somewhat substantially. I mean, a lot of e-commerce sites will have a lot of product descriptions and things like that that are hidden under a tab. I'm guessing you guys are pretty good at knowing what the main content on a page is. And so you're going to, correct me if I'm wrong, please, kind of demote or somewhat ignore boilerplate content on any kind of page, whether it's an e-commerce page or whatnot. So it doesn't really matter anymore for you guys, I'm guessing, if it's hidden or not, because you're pretty good at determining what the important content on the page is anyway.

Usually, yeah. Usually, we can figure that out. Especially on e-commerce sites, there's just so much boilerplate that's repeated across the site that we can treat appropriately.

So what's going to happen now is I'm going to get a client. He's going to email me, freaking out: all my product descriptions are copied from my supplier, and now Google's reading it all. Should I rewrite this content? What should I say? John, help me out. What should I say to this person? Yes, pay me $10,000 to write all this content? Or should I say, no, no, you don't have to bother rewriting all this content? I don't know.

How much do you like to send out bills?

I don't know. No, sorry. I would rather do the ethical thing, believe it or not.

No, it's something that I think applies to e-commerce sites now as well, so not specifically to the mobile-first launch. But in general, if the main content on your page is copied one-to-one from other sites, then what will happen is we'll index these pages, because the rest of the page is kind of unique, but we'll try to show the most appropriate one in the search results. So you will sometimes see that if you do a query for something and go to the last page of the search results, it will say something like: there were so many duplicates found as well, you can show them all. And that's essentially what's happening there: we recognize that that text snippet is actually the same across a number of different pages. We've indexed them all, so we know about these pages, but we're just showing a sample of them, based on what we think is the most relevant for the user. And for e-commerce sites, usually that's less of a problem, because you kind of have your unique niche, where either you're locally active or you have a unique spin on things. You kind of have a different user group than other sites might have. And that's how we would show those sites. But we already recognize that this text is duplicated across a number of different websites. So we're already kind of filtering that, or potentially filtering that. And if this is really a website that has its unique spin on things and kind of has its unique group of users, then rewriting those descriptions is not necessary all the time. But obviously, if you want to make sure that your pages rank on their own and are shown all the time, then having clean, unique descriptions kind of helps there.

OK, I understand.
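If you want a rough, local estimate of how close your product descriptions are to the supplier's stock text before deciding what to rewrite, something like the sketch below can flag near-duplicates. It's a hypothetical helper built on Python's standard library, not anything Google provides; the sample strings are placeholders.

```python
# A rough, hypothetical check: how similar is our product description
# to the supplier's stock text? SequenceMatcher.ratio() returns 0.0-1.0.
from difflib import SequenceMatcher

def description_similarity(our_text: str, supplier_text: str) -> float:
    """Return a 0-1 similarity score between two description texts."""
    return SequenceMatcher(None, our_text.lower(), supplier_text.lower()).ratio()

supplier = "Stainless steel 12-cup coffee maker with programmable timer."
ours = "Stainless steel 12-cup coffee maker with programmable timer."

score = description_similarity(ours, supplier)
if score > 0.9:
    print(f"Near-duplicate ({score:.0%}): consider rewriting this description.")
else:
    print(f"Sufficiently distinct ({score:.0%}).")
```

SequenceMatcher is character-based and slow on a large catalog; shingling or hashing would scale better, but for spot-checking a few description templates this is enough.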
And so my last question for the mobile-first index, and I think you've answered it: I was going to ask if this was going to change how you're filling in your entity database. If you're crawling the mobile site, are you going to have a separate entity for the mobile site versus a separate entity for the desktop site? But if you're just reading the entire page and creating an entity based on all the information you find there, actually, probably a number of entities in most cases, then my question is moot. Do I have the right thinking there?

Yeah, yeah, pretty much.

Cool. OK, thanks, John.

All right, let me run through some more questions that were submitted, and then we can open things up again.

"We have three different websites with hreflang correctly implemented, but in the UK, one version is poorly indexed, so searchers are finding the wrong version. Is there a way to force Google to show the en-GB pages?" No, you can't force Google to show that. The hreflang is a way of giving us the information that these pages are connected. What you can do, however, if you recognize that the wrong user is reaching the wrong version of a page, or the right user is reaching the wrong version of a page, is maybe show a little banner on top telling the user that, actually, if you're from the UK, here's a version with British pounds as the price, for example. So that's one thing you can do. You can't force them to go there, because if you redirect all users to that version, then Googlebot will be redirected as well, and you'll kind of lose the other versions of those pages.

"I have a page that was written for the en locale, and Google is ranking it well across different English-speaking regions. If I add a Canadian-specific version of the page and include an hreflang, will Google consolidate the ranking signals for both pages, or will the new page cannibalize the original, hence reducing its ability to rank internationally?" Maybe. So what will happen here is, if you set up a separate page for the Canadian version, we will index both of those versions separately, and we'll collect signals for both of those versions separately. And we'll try to show the appropriate version in the appropriate search results. So if someone from Canada is searching for your content, we'll try to show the Canadian URL there, where we would otherwise show the generic English page. So what you can kind of estimate here is how many times your generic English page is being shown to Canadian users, and assume that we would swap that out and show the Canadian-specific version in those cases. So it's not the case that you would automatically rank higher, or that you would rank differently. We would just try to show the appropriate version at the appropriate time.

"We currently use CSS text animation through Visual Composer on our website. When we do Fetch and Render in Google Search Console, the page is blank where the text animation would be. Would this prevent Google from crawling the copy? And will this negatively affect our ranking?" As far as I know, there are some animation types that we don't support in Googlebot's rendering, and that might be what's happening with your page there. What I would do here is set up a simple test page, a simple static HTML page that uses the animation that you're trying to do, and narrow things down to figure out what exactly is causing Googlebot to stop rendering that page.
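As a quick companion check to that kind of narrowing down, you can look at whether the animated text appears in the raw server HTML at all, since text in the static HTML can still be picked up even when rendering fails. A minimal sketch; the URL and phrase are placeholders.

```python
# Minimal sketch: does the animated text appear in the raw server HTML?
# This fetches the page without executing any JavaScript, which is a
# rough proxy for "is this text in the static HTML part of the page?"
import urllib.request

def text_in_static_html(url: str, phrase: str) -> bool:
    """Fetch the raw HTML (no JavaScript execution) and look for the phrase."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return phrase.lower() in html.lower()

if __name__ == "__main__":
    # Hypothetical test page and animated headline text.
    print(text_in_static_html("https://example.com/test-page", "animated headline"))
```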
And when you figure that out, you can make an educated call: does it make sense to have a fallback on my pages so that Googlebot can render this text properly? Or is this text still being picked up because it's in the static HTML part of the page, and doesn't need to be done differently? So just because it's not shown in Fetch and Render doesn't mean that we don't use it at all. Another simple test you can do to get a rough idea of whether this is being picked up at all is to search for that text in quotes. And if your pages are still being shown, then things are probably OK. And often we also see that these types of animations are done for text elements that are not critical on a page. So they'll be things that make it look nicer and kind of add some flair to the page, but they're not really critical for the page's indexing and ranking. And in a case like that, obviously it doesn't really matter that much if that text animation is indexed or not.

"I was wondering if and when the EducationEvent markup is used in search results as a rich snippet. A site I'm working on uses it but isn't rewarded with a rich snippet." I don't know specifically about the EducationEvent markup. I would probably start a thread in the help forum, and probably someone will pick it up and send it my way when you have an example there. I know there are lots of types of markup that you can use from schema.org that we don't use at all in Search. You're welcome to use them on your sites if you think they provide value otherwise, or if you think that Google is definitely going to use them at some point in the future. But we can't guarantee that we'll be able to do anything special with them. So if you really need to limit the amount of work you put into your site, if you have a limited budget for developers, then I would focus on the types of structured data that are actually visible in the search results.

"Would I gain any benefit from adding 'Home' to the start of my breadcrumb? Currently, it's not included due to the way the system is set up, but I feel that offering full hierarchy accessibility would be useful. Would this strengthen the perception of my internal links?" I think from a ranking point of view, from an indexing point of view, that would change nothing. If it adds value to your website for your users, then feel free to go ahead and do that. But it's not the case that we would see this as a ranking signal and suddenly rank the website a little bit higher.

"I have a site that's displaying a warning in the search results, 'Your site isn't mobile-friendly,' yet it passes the mobile-friendly test. What's the reason for this apparent contradiction?" So I took a quick look at this site. Maybe taking a step back: the 'your site isn't mobile-friendly' label that we show in the search results is shown only to the webmaster, so only to someone who has that site verified in Search Console. We're specifically trying to catch the situation where you're searching for your site and we have something that we'd like to tell you. And essentially, what's happening there is we recognize that maybe there's a mobile version of the site, but it's not such that we would actually use it as a mobile version. So we show you this warning. I took a look at this, and looking at the mobile-friendly test, I can kind of see that there's a lot of small text there as well. It has technically passed the mobile-friendly test, in that it has the label "This page is mobile-friendly."
But looking at the PageSpeed Insights test, it actually comes back with a pretty mediocre score. For me, it's showing 71 out of 100 for mobile. The screenshot that's shown also has a lot of really small text on it. So this is something where, probably from a technical point of view, we'd say, well, it has the technical UI elements that we're looking for in a mobile-friendly site, but it's not really a mobile-friendly site in the sense that we would be able to recommend it to mobile users. So that's something where I'd take a look at the PageSpeed Insights results and try to find a way to improve that.

"Why does the crawl errors report never give even a 50% accurate report?" I'm not really sure what you mean there. The crawl errors report in Search Console focuses on what we have actually seen from crawling the site. So it's actually a pretty accurate view of what we see when we've crawled the site. Obviously, the thing to keep in mind there is that not all crawl errors are critical. It's completely normal for a healthy website to have a lot of pages that return 404, in that all of the invalid URLs on your site should be returning 404. Someone might be linking to a lot of random URLs on your website, or we might have collected a bunch of random URLs on the website over the years. And if they're invalid, if they no longer exist, they should be returning a 404. And that's actually a good sign. So it's not that you always have to fix every crawl error that shows up, but rather take a look at the errors that we found. And if there's something that you think should actually be indexed normally, then obviously that should be resolved and kind of cleaned up to make sure that it's not returning a crawl error anymore.

"Why is Google changing my page's title in the search results?" We do change titles sometimes. We sometimes change them based on the query as well. So this is something where we're trying to figure out what your pages are about and trying to make it clearer for the users that, actually, this is a really good page on this topic, you should take a look at it. This more often happens when we recognize that a title has repetitive words in it, when it looks like a marketing blurb rather than a descriptive title. There's some information in our Help Center about how we choose titles, so I'd take a look at that.

John, I have a follow-up on the 404s.

OK.

One thing I saw a friend of mine doing, I'm not sure it's a good thing, but I was thinking of asking you. He is actually tracking all these 404 pages. And instead of showing 404 errors and returning 404 codes, he creates practically blank pages with a 404 message, but without returning the error code, saying the page is not here, you should go to the home page. But he's thinking that this way he is keeping the link credit from those links, because those pages actually report as 200, as existing pages, and link back to the home page. Is that a smart thing? Is that a bad thing? Is that an illegal thing? Do you get any credit this way? Would you get any credit for the 404s?

You don't have any advantage from doing this. So that's maybe just the first step. It's not that we would see this as extra value that you're providing. What would happen is we would see this as a soft 404 page, and we would treat it internally like a normal 404 page. So that's something where you're not returning a 404 to us, but we can tell that it's actually a 404 page, and we'll treat it like a 404 page.
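For reference, the fix here is just to send the real status code: a "not found" page can still be friendly and link home, as long as it returns 404 rather than 200. A minimal sketch, assuming a Flask app; the framework choice is an assumption, and any stack can do the same.

```python
# Minimal sketch (hypothetical Flask app): serve a helpful "not found"
# page, but with a real 404 status code, instead of returning 200 for
# missing URLs, which crawlers would treat as a soft 404.
from flask import Flask

app = Flask(__name__)

@app.errorhandler(404)
def not_found(error):
    # A friendly page is fine; the status code is what matters to crawlers.
    body = "<h1>Page not found</h1><p><a href='/'>Back to the home page</a></p>"
    return body, 404  # real 404, not a 200 "soft" error page

@app.route("/")
def home():
    return "Home"
```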
So from that point of view, they don't have any advantage from that. The problem this causes is that we tend to crawl these pages more often than we would crawl a 404 page. So if we see that it's a 404 page, we'll probably not crawl it as frequently. Whereas if it returns 200, we'll probably crawl it with a normal frequency. So what you're doing there is you're kind of sending us to all of these pages that don't exist. We crawl them like they're normal web pages, but then we ignore them in the end, because we recognize, oh, it's kind of like an error page, we can just ignore this. So they're not doing themselves any favors by doing that.

And such links, whether to hard 404s or soft 404s, don't add to domain authority, rank, whatever?

Exactly.

OK, thank you.

Let's see. "I am seeing extremely inaccurate video search results. There are post articles showing up in video search results for recipes where the post article doesn't have a video at all. Here's an example. Does video search support rich snippets for post articles with YouTube videos? I use JSON-LD video schema to mark up my YouTube videos on my recipe pages." OK, so I guess these are two types of questions. On the one hand, we're showing video snippets for pages that don't have a video. On the other hand, we're not showing them for other pages that do have videos. In the cases where we're showing video snippets for pages that don't have a video, that's probably a mistake on our side. So having some examples there would be useful for us to pass on to the video indexing team, so that they can take a look at why we think there might be a video there when there actually isn't. If we're not picking up the video markup, then I'd double-check the markup that you're using there to make sure it's really the normal video markup that we have in our Help Center. You may also want to use a video sitemap for these URLs. Even if you're hosting your video on YouTube, you can submit a video sitemap for the landing pages where you have the videos embedded. So that helps us there as well. So for the first one, where we're showing video snippets for pages that don't have a video, those examples are good to have. And for the second one, I'd really make sure that you have everything set up properly for those videos.

"I just moved my site from HTTP to HTTPS on Google Cloud Platform. I've created a VM instance..." and, let's see, it goes on: when the certificate is tested with the SSL Labs tool, there's an incomplete chain error being shown. So the SSL Labs testing tool is a great way to double-check that your certificate is set up properly. If you see any issues there, I would recommend going either to your hoster, if you've got everything from your hoster, or maybe to the person who issued your certificate, so that you can follow up with them to see what specifically is the problem here. It's kind of tricky for me to debug that on the fly. But especially when it comes to the TLS certificates for HTTPS, you do have to watch out for all of these details. Otherwise, browsers might look at this and say, no, I don't know if this certificate is actually trustworthy or not, and they might not show it with the green label for HTTPS.

But once the green label is shown, there is no difference between the types of certificates: how big the guarantee is, whether it's site-wide or not including subdomains? It's not like a certificate that's more expensive or more trusted gives more credit?
As long as it's a valid one, no. So from a search point of view, if it's a valid certificate, if we're indexing it with HTTPS, then everything is fine. There can be differences, obviously, from a user and from an encryption point of view between different certificate types. But from a search point of view, if it's a valid certificate and if we index a page with HTTPS, then we'll treat it normally.

OK, thanks.

"With all the changes to search results, I'm wondering when Search Console will be updated to correspond to these changes. For example, the Search Appearance report would be great if it could report on clicks and impressions in the Knowledge Graph or the Quick Answer box. Also wondering: do clicks in the Quick Answer box count, or just clicks from a traditional ranking?" So we have a great Help Center article on this, actually, that covers all of these different types of search results, where we show what we would count as the average top position in these cases, and how we would count the clicks and impressions for these types of search results. And we do try to count all of these when it comes to the Search Analytics report in Search Console.

"When I search in traditional Chinese, simplified Chinese results also show up, and vice versa, with bolded keywords in the snippet. I thought Google treats different scripts as different languages. Am I wrong? Or is the script variation just one of many different ranking factors?" I don't actually know how things are handled in this case. It's hard for me to judge the difference between the different Chinese scripts because I don't understand them. But what might be happening here is that we're recognizing these as synonyms. So we recognize that people often search for this variation, and sometimes they search for that variation, and maybe they're searching for the same thing, so we can perhaps treat them as synonyms. It's not so much that we would translate these and try to see, oh, this word translated means this word. Rather, we recognize that these are probably synonyms, maybe written in a different way, maybe even with a different script, but we think they're, for a large part, probably the same thing. So we'll treat them as synonyms when it comes to Search. I don't know if this is a good thing or a bad thing here. From your point of view, if you think this is something that should not be happening in Chinese, then feel free to send me some examples, ideally with screenshots, and let me know some details that I can forward on to the team, so that they can take a look there as well.

"I recently migrated a website to a new design. The website was totally offline for four days during the migration and fixing of the new design issues. Now the website is OK, with a much better page speed score, but organic traffic has decreased by 50%. How long do I have to wait?" Hard to say. On the one hand, if you've taken your website offline for a couple of days, then we will probably end up dropping the URLs that we tried to crawl during that time. We'll pick them up again, of course, when the website comes back. Ideally, if you're changing the URLs, you would have redirects set up as well, from the old URLs to the new URLs, so we can forward all the signals from the old website to the new URLs. Or even better, if you're using the same URLs as before, then we can just reuse everything as before.
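The redirect setup mentioned there can be as simple as a lookup table from old paths to new ones, each answered with a 301. A minimal sketch, again assuming Flask, with a hypothetical URL map; anything unmapped falls through to a clean 404.

```python
# Minimal sketch of a redirect layer for a site migration, assuming Flask
# and a hypothetical old-to-new URL map: forward each old URL with a 301
# so signals can be passed on to the new URLs.
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical mapping from old paths to new paths.
REDIRECT_MAP = {
    "/old-category/widget.html": "/products/widget",
    "/about-us.php": "/about",
}

@app.route("/<path:old_path>")
def legacy(old_path):
    new_path = REDIRECT_MAP.get("/" + old_path)
    if new_path:
        return redirect(new_path, code=301)  # permanent redirect
    abort(404)  # anything unmapped should be a real 404
```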
And what would happen then is, as we recrawl and reindex the website that's active again, we would just be able to kind of bubble that up into probably more or less the same position in the search results. The other aspect, however, is that if you've just created a completely new website, or completely redesigned things, and now your whole website has a different internal structure and content that looks very different, then that's something we first have to figure out and understand: this content here is now the primary content, and these internal links should count this way. That's something that can take quite a bit of time to actually figure out. And especially if the internal linking has changed, if the internal URLs have changed, then that's kind of like a new website that's being put up on an old domain. And we have to figure out: how do we treat this new website on the basis of this old domain? How is the new content related to the old content? Can we forward some signals to the new content, or do we have to start over again and kind of learn the whole website again? So the more changes you make, especially with regard to a migrated website, if you change URLs, if you change the design, if you change the internal linking, the more you will see fluctuations in search. Some of that will settle down after a brief time, when we can re-crawl and re-index everything. And some of that might settle down into a different position in the long run, which might be better than before, if we can pull out all the content much more easily than before, or worse than before, if everything is kind of hidden away and hard for us to crawl and index properly.

"If someone redirects an entire website to my domain, it might be a black hat SEO tactic. How would it possibly affect my website? Can I use the disavow tool in this scenario to prevent any possible penalty from the redirecting site? Can I clean up the data in Search Console by any means?" For the most part, our systems recognize this situation fairly easily. This is not something new that has just recently happened; people have been doing this for as long as I can remember. So our systems are pretty immune to just random websites redirecting to your website and kind of passing on any problematic issues there. So that's something where, for the most part, I would just ignore this and move on. With regards to the disavow tool: the disavow tool is specific to links to your website. So as far as I know, we wouldn't track redirects as links to your website. If you see it listed as a link in Search Console as an external link, then that's something you can use the disavow tool for. Otherwise, this is something that essentially you can't block from the Search Console side.

From the minute I press Submit, and I get a reply back in Search Console saying that this has been processed and so on and so on, how long does it take until it's actually processed? Like, from the minute I press Send.

So it's processed immediately, in the sense that it's in our systems, and the next time we re-crawl those pages, we will drop those links. The unknown, I guess, is the next time we re-crawl those pages, which can vary. And sometimes, especially if you're disavowing a whole domain, we'll be able to disavow some of those links a little bit faster, because we re-crawl them a bit faster, and some of them take a bit longer. So the processing is done immediately.
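For context, the disavow file that gets submitted there is just a plain text list: one URL or one domain: rule per line, with # lines as comments. A small sketch that writes such a file; the domains, URLs, and note are placeholders.

```python
# Minimal sketch: generating a disavow file (plain text, one rule per line).
# Domains and URLs here are placeholders. Lines starting with "#" are comments.
rules = [
    "# Spammy directory links, contacted webmaster 2016-12-01, no reply",
    "domain:spammy-directory.example",           # disavow a whole domain
    "http://other.example/forum/profile?id=42",  # disavow a single URL
]

with open("disavow.txt", "w") as f:
    f.write("\n".join(rules) + "\n")
```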
The re-crawling and dropping of links takes somewhere between days and weeks, I guess, somewhere around that.

You're making that faster for 2017, aren't you?

I don't think so. No. I don't see that happening. Essentially, we have to reprocess those pages and then drop those links. But usually what happens there is, if this is a link that has a lot of weight, then we will re-crawl it a little bit faster anyway. So it's not something where the important links to your website would not be re-crawled for a month.

OK. Thanks.

All right. Let me see. One question here with regards to sitemaps: "I'm using Magento. Does it make sense to submit different sitemaps for products and categories, or is a default sitemap enough?" You can use separate sitemap files if you want to. You don't need to. Sitemap files are essentially machine-readable files, so you can submit them however is easiest for you. And on our side, we will just collect all the sitemap files and process them regardless of how you submit them.

Hey, John, I had one follow-up question on the mobile-first index, if you don't mind me jumping in. I forgot to ask: once you make this switch, are you going to rank desktop sites on the desktop design or on the mobile design?

On the mobile version.

OK, because you guys can parse out, at least it seems to us, you can parse out the desktop design, because you show it to us in Search Console. You say, this is what it looked like.

Yes. What we're primarily doing is switching from indexing the desktop version to indexing the mobile version. It would be harder for us to actually index both versions, and kind of index them separately and rank them separately. So we're essentially switching to the version that most users would be seeing when they're using Search.

I see. OK, fantastic. And the last question on that, then, is getting into AMP. AMP kind of makes this a little bit more confusing, in that people are asking now: if you're moving to a mobile-first index, should we just skip over a mobile design and go right to AMP at that point?

You can do that. Sure. I agree that it makes it a little bit more confusing, because there are more options, but you could do that. And there are some sites that have the AMP version for desktop as well, because AMP is, by design, a responsive layout system. So you could switch your whole website to AMP, and that would mean your desktop version would be an AMP page, your mobile version would be an AMP page, and when we show it in the AMP viewer, we can also show the AMP page there. That's theoretically possible. Whether or not that makes sense for your website kind of depends on the functionality that you need and, of course, the capabilities that you have when it comes to designing and developing a site like that. I think at the moment, if you want to make everything AMP, you pretty much have to create your own templates and do a lot of hard work getting your CMS there. So it's probably not something that the average kind of website would do, whereas taking that first step and saying, well, this is my desktop and my mobile site, and here is the AMP version of those pages, is something that's a lot more doable with some simple plugins for a lot of CMSs.

OK. So here's the question that everyone wants to know. I mean, with HTTPS there's a slight ranking bonus, so if you have the choice of HTTPS or not, you'll choose the HTTPS version. The same question for AMP:
I mean, if you have the choice between an AMP version of a site and a non-AMP version of a site, all other things being equal, are you going to choose the AMP version instead?

Probably not. So at the moment, we don't have any plans for an AMP ranking boost in that sense. It's not that your site would rank higher if you had an AMP version. But if you're on a mobile device and you search in a way where we could show you the AMP viewer, then we'll probably show the AMP version of the page in the search results. So it's not so much that we would rank something higher, but we would show the equivalent version as AMP, if we can do that for that device, in the search results.

So it's more like hreflang?

Yes, exactly.

OK. Because I listen to, see, I watch these things. And I listen to Paul, I don't know how to pronounce his last name, Bakaus, your AMP guy. And when he was doing his presentation a couple of weeks ago, it sounded to us like he was saying that, obviously, AMP was going to move from news sites to any kind of site, and that the AMP version was going to take priority in terms of the rankings.

I think what he was referring to is, if you have a situation where you have a normal setup with a desktop and mobile site and an Android app, a native app, where currently we would otherwise show the app indexing results first, then if you also had an AMP version, the AMP version would trump the app indexing results.

OK.

So that's kind of a special scenario. If you have this setup, which I know Barry has as well, and Barry has brought up this point as well, where he's saying, well, maybe I want my app to actually rank instead of the AMP version. And at the moment, we don't have a setting for that.

And at the moment, that's going to happen in, what, two and a half hours?

No, not in two and a half hours. Definitely not. I don't know of any plans to add that kind of a setting there, but that kind of feedback is always valuable to have. And I guess it's tricky, because we're all making assumptions about what users prefer to see. Like, do they prefer to see the AMP version, or would they like to have a link directly to the app instead? I think that will probably settle down over the course of the year, or in the future. But I don't know what the final state will be, whether it'll be a setting, or whether it'll just be, well, everyone is moving away from apps and moving directly into instant apps or something. I don't know.

So, John, it's the top of the hour. Do you want to leave us, at the end of the year, with maybe three things to look at for 2017, assuming you did everything right in 2016?

All right. So I would say mobile and mobile and mobile. That's really easy, right? No, I think, especially with the change to the mobile-first index, one of the big things that I suspect SEOs in particular will want to look at is how the mobile pages are actually SEOed. So if you've done on-page SEO for your desktop site, then usually you've spent a lot of time looking at the HTML code for your desktop pages. But the mobile page is essentially just like a mobile version of the same page. You haven't really spent time figuring out how you need to optimize the code there so that it's proper on-page SEO. So I suspect that's something people will want to spend a bit of time on, trying to figure that out, and trying to make sure that the tools they use are actually focusing on the mobile version of the page.
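One simple way to point your own checks at the mobile version is to request pages with a mobile user-agent header. A minimal sketch; the UA string here is just an example of a mobile browser string, not Googlebot's exact one, and the URL is a placeholder.

```python
# Minimal sketch: fetch a page the way a mobile client would, by sending a
# mobile user-agent header, so your own checks look at the mobile version.
import urllib.request

MOBILE_UA = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/41.0.2272.96 Mobile Safari/537.36")

def fetch_as_mobile(url: str) -> str:
    """Return the HTML that a mobile user-agent would receive."""
    req = urllib.request.Request(url, headers={"User-Agent": MOBILE_UA})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

html = fetch_as_mobile("https://example.com/")
print(len(html), "bytes of mobile HTML")
```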
So if you use a crawler for your website to see how it's internally linked, make sure you're actually testing the mobile version of the page and not just the desktop version. For all of these kinds of third-party tools that you're using to evaluate your pages, kind of the understanding of on-page SEO, make sure that you apply that to the mobile version of the page as well, and don't just focus on the desktop version.

But if you have responsive design, really, there shouldn't be much of a change.

Yeah, sure. I mean, that's one of the reasons why responsive design is so great: you have all of that in the same place. You have the structured data in the same place, hreflang, all of that just kind of falls into place. But if you don't have responsive design, and a lot of sites don't have that, maybe for historical reasons, or maybe just because different teams are responsible for the desktop and the mobile versions of the site, then obviously that's something you'll want to work on.

OK, John, going back on the AMP thing, I have a question related to this. Not really AMP, more like CDNs, with the idea of distributed networks, where your site basically is not really staying on your server or your IP anymore, but is located in different places in the world. I always avoided using CDNs because I was afraid that I might land in a bad neighborhood, so to say, and be penalized, because a lot of other sites are using them and are served from the same servers and things like that. And I always preferred to have my site on my own server, on my own IP, and things like that. Am I wrong to be so cautious about that?

For the most part, I don't think you'd have any problems using a CDN. So I haven't personally seen any site that had problems from using a common CDN where there are lots of spammy sites on it as well.

Cloudflare, Amazon...

Yeah, I wouldn't see that as being a problem. So usually the same-IP problem is really a situation where we're talking about 10,000 spammy websites on one IP address, and then you have your one normal website on there as well, where maybe someone from the web spam team would say, well, this whole server is just pure spam, we're just going to treat the whole server as being pure spam. But that's a really exceptional situation. I haven't seen any false positives in that regard. And if you're looking at a common CDN, then none of that really applies, because you have this kind of natural mix of some good, some bad sites on the same CDN.

No things being discounted because they come from the same CDN?

No.

So you're basically saying that in 2016, 2017, a unique IP is not even needed anymore?

For the most part, you don't really need that, no.

OK, John, is there time for a last question?

Go for it.

Thank you very much. So yeah, our goal is to keep the index as clean as possible for a client's website. And that's why we want to noindex or delete about 500,000 pages, because these pages have hardly gotten any traffic since their creation, for, I don't know, 12 months or so. So on the basis of the log files, we analyzed Googlebot's crawling behavior, and we came up with the following strategy: we will set a bunch of pages, let's say 10,000 per day, to noindex or 410, add these pages to a separate XML sitemap, and repeat this step every day until all pages are noindexed. So what do you think about this strategy? And does it really matter, or can we also set all pages to noindex or to 410 at once? What are your thoughts?

I think both would work.
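The batching strategy described in that question could be sketched roughly like this: split the removal list into daily chunks and write each chunk as its own XML sitemap, so recrawling can be watched per batch. Filenames, URLs, and the batch size are placeholders.

```python
# Minimal sketch of daily removal batches: take a list of URLs that will be
# set to noindex or 410, split it into chunks of 10,000, and write each
# chunk as its own XML sitemap file.
from itertools import islice

def write_batch_sitemap(urls, filename):
    """Write a plain XML sitemap containing only <loc> entries."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url in urls:
        lines.append(f"  <url><loc>{url}</loc></url>")
    lines.append("</urlset>")
    with open(filename, "w") as f:
        f.write("\n".join(lines))

def batches(iterable, size):
    """Yield successive lists of up to `size` items."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

# Hypothetical list of 500,000 pages to remove; one sitemap file per day.
all_removals = (f"https://example.com/thin-page-{i}" for i in range(500_000))
for day, chunk in enumerate(batches(all_removals, 10_000), start=1):
    write_batch_sitemap(chunk, f"removals-day-{day:03d}.xml")
```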
I would definitely, in the beginning, pick a small sample set to practice with, so that you know what to expect. Like, is Googlebot going to crawl those pages immediately or not, just so you know what will happen there? But once you're sure that you're doing it in the right way, then I would just throw them all in there and have them all noindexed at once.

OK, thank you.

All right, great. So yeah, I guess this is the last Hangout for this year. It's been fun again. Thank you all for all of your questions and comments over the year. I'll definitely set up more for next year as well, and hopefully I'll see some of you there. Maybe we can talk about mobile-first and desktop sites; maybe having a desktop-friendly version of your pages will also be something to think about.

All right, thank you, John, and have a great new year.

Yeah, thank you. Have a great year, everyone. Happy New Year. See you next time. Bye, everyone. Bye. Happy holidays, folks.