Is it the green button? Yeah. Yeah. OK. Welcome, everyone, to today's special Google Webmaster Central Office Hours Hangout, together with some awesome SEO folks in the Czech office. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And let me switch over to Pavel and the folks in the office here.

Yeah. So John, welcome to our second opportunity to have direct contact with the Webmaster Tools team, and with you, the one behind the most discussed questions regarding webmasters' concerns, not only Webmaster Tools or Search Console, but also other aspects of ranking itself. And I guess we're going to do a little round-up just to see who is in the room, who is going to ask you the most annoying questions, or more pleasant questions, we will see. We also have some questions on Google+. So if you are watching us live, then please go ahead and +1 the questions that you find useful. And please retweet the link so that as many people as possible can join us. And if I may ask for a little introduction, I'm not sure if you are visible, so I will just do a little tweak. OK.

Hello. My name is Dnieck. I'm the CEO of the company Medio Interactive. I really like the technical side, so I'm interested in crawling. I love to do analysis of backlinks and visualize web data. Thank you.

Hey, John. I'm Filippo Stiles. I'm the owner of a small SEO startup in the Czech Republic and also an SEO specialist for one of the biggest e-commerce websites in the Czech Republic, called Heureka. I think we've met before on some of your Google Hangouts in the past. But this is the first time I see you live, so it's nice to meet you.

Hello, John. I'm Jaroslav Leninga. I'm an SEO specialist at Seznam.cz, a search engine in the Czech Republic. I think that's all.

Hi, John. My name is Pavel Unger, and I am behind the idea of this meeting of SEO specialists in the Czech Republic. And we already met last time, half a year ago.

Hi, John. My name is Ondra, and I work as an SEO consultant at the agency H1.cz. And I like SEO as a part of the marketing mix, not especially the technical things; I focus on placing SEO within the marketing mix.

Hi, John. My name is Maria, and I'm an SEO specialist at the agency H1.cz.

So hi, John. My name is Pavel Unger, and I work as an SEO specialist at the agency H1.cz. I want to thank you for this opportunity to be able to speak with you. It's a pleasure for me to meet you.

Great. And we also have two guys on the Hangout. Martin would like to say a word. Now they're probably trying to be as invisible as possible. Then, John, we'll ask you if you can now take the floor and go through the questions that we provided, and we'll probably come up with some other questions during the discussion.

All right. Fantastic. OK, so a bunch of questions were submitted already. Let me just refresh the list so that I have the right order and the right questions. As always, I'll try to go through most of these. If you have any questions or comments in between, feel free to speak up if you're in the room, or if you're just joining the Hangout directly, feel free to jump in. And I'll try to leave some room towards the end for any additional questions that might come up.

OK, the first question I have here is: can we expect some rapid increase of featured snippets in the Czech search results? We see them more and more, but mainly for definition-type queries. In the US, it's quite common to also see them for other types of queries.
I don't have anything to announce, so I can't promise that we will show more of these. I do know it's something that the team is working on, to spread featured snippets more widely. Finding ways to bring them into the search results so that they provide value is sometimes not quite that easy, especially when you don't understand the language fully yourself. You have to rely on a lot of people's help to get them there. So I would assume we'll see more, but I don't have any promises that I can make there.

How do you count crawl budget for rendered pages? Does JavaScript rendering in Googlebot count all of the embedded files, or do only the HTML pages count?

Yes, it counts per request that we make to the server. So if we have to get lots of images, if we have to get lots of JavaScript or CSS files, all of those are additional requests that we have to make to your server. So they all play a role in there. One thing that makes this less of a problem than it feels at first is that we do a lot of aggressive caching on our side. So if we've seen the JavaScript file before and we assume it hasn't changed, then we're not going to fetch it again. We'll use the cached version for a while and rely on that. Or we'll use the cached CSS or the cached images and reuse those. So it's not the case that we have to do a full page rendering of everything on the page every time we crawl a page. It's really, for the most part, the case that we crawl the HTML, we can reuse almost everything else, and we just render the page with that. That also means that if you make significant changes in your JavaScript or your CSS files, then it helps to let us know about that. So a simple way to do that is to use versioning in the URLs. What commonly happens is you put a question mark, and then you add timestamp equals and then the date of the change that you made to the JavaScript file, so that when we try to render that page, we'll see that the URL changed. It has a new timestamp on it because the JavaScript file changed, and then we know that we need to fetch that JavaScript file fresh from the server. So that can help us there. For the most part, I think people put almost too much emphasis on crawl budget. For almost all websites, it's not a problem. It's not something that they have to worry about.

In the case that the same content is located on HTTP and HTTPS and a redirect is missing for some reason, do incoming links count for each URL separately?

For the most part, what happens here is that if we discover the HTTP and the HTTPS version of a URL, when we crawl those pages, we see exactly the same content. And when we see exactly the same content, we can fold those together, and we will treat them as one URL. So when we treat them as one URL, then all of the links count for that one URL. We'll pick one of these and say, this is the primary URL or the canonical URL for this piece of content, and all of the links will count for that canonical URL. So it's not the case that you dilute things by having both of these versions available and no redirect set up. That said, I would still recommend setting up a redirect or using rel canonical so that things work the way that they should, so that we pick the URL that you want to have chosen, not the one that we think might be the right one. Because if you pick the one that you want to have chosen, then that means you know in Search Console where to look for the data.
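To make those two tips concrete, here is a minimal HTML sketch of both the URL-versioning trick and the canonical annotation; the file names, dates, and URLs are hypothetical placeholders, not anything from the discussion:

```html
<!-- Cache busting: when the JavaScript file changes, change its URL too,
     for example with a timestamp parameter, so Googlebot fetches the new
     file instead of reusing its cached copy (the date is a made-up example): -->
<script src="/js/app.js?timestamp=2017-05-10"></script>

<!-- HTTP/HTTPS duplicates: on the HTTP page, declare the HTTPS URL you
     want picked as canonical (ideally alongside a server-side 301): -->
<link rel="canonical" href="https://www.example.com/page">
```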
Whereas if we pick HTTPS and you look at HTTP in Search Console, then you don't see the data for the HTTPS URL. So that's one thing where I recommend setting up a redirect, or using rel canonical, or all of that together, just so that you have the data in the place where you're actually looking for the data.

I saw many times that Google ignored the meta description and created its own description in the search results. What could be wrong? Why is Google doing that? Is the meta description important for Google?

So this is something that's sometimes difficult, because we do try to use the meta description as much as we can. But we also adjust the snippet based on what the user is searching for. So if the user is searching for something that you don't mention in your meta description, then maybe we'll take something from the content. Or if the user is searching for something that is in your meta description, then we'll try to use that from the meta description. So that's one thing to watch out for: don't take one search results page and assume that all other search results pages for this URL look the same. What I usually do is look in Search Console at which queries have the highest number of impressions, and try those queries out yourself and see what actually comes back. And sometimes you'll see that we picked something from the content because the meta description just doesn't mention it at all, and it's unclear to the user how this page is relevant to what they were searching for. In cases like that, I would consider expanding the meta description so that you actually cover all of this, to make sure that users really know what your pages are about. There's no fixed character limit for the meta description, no minimum or maximum. The limit is mostly determined by the screen space that the user has available. So on a phone, there's less space; on a desktop, they have more space. So we'll use more or less of the meta description. The meta description isn't a ranking factor. So it's not that you have to stuff keywords in there or anything crazy like that. It's really just something that we use to show a snippet in the search results.

Is it OK with Google to use a pagination combination: rel next and previous, with index,follow on the first page of pagination and noindex,follow from the second page on?

So yes, you can do this. I think with rel next and previous, it doesn't really make sense if you link to noindex pages, because we don't use that content in search. If it's a noindex page, then we won't show it in the search results. So it's not something where I'd say you're getting additional value from using rel next and previous if the other pages are all noindex. That might be one thing to think about. In practice, it doesn't harm anything; it doesn't cause any problems. So if you have rel next and rel previous and the other pages are noindex, that's the way it is; we have to deal with that. With regards to noindex for the second page and the following pages, one thing to keep in mind is that we won't be able to show those pages in the search results then. So if those pages have been getting impressions in the past, they won't be getting them in the future. Maybe you're OK with that. Maybe you have other pages that slide up in their place. That's something that depends on how you want to set it up. So it's fine to use this combination. I think that the optimal setup really depends on the website itself.
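For reference, a minimal sketch of the combination being asked about, as it might look on the second page of a series; the URLs are placeholders:

```html
<!-- Page 2 of a paginated category: kept out of the index, but its links
     are still followed; rel prev/next points at the neighboring pages. -->
<link rel="prev" href="https://www.example.com/category?page=1">
<link rel="next" href="https://www.example.com/category?page=3">
<meta name="robots" content="noindex, follow">
```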
And there is no simple one-size-fits-all setup for pagination. Sometimes that's really tricky to set up well.

Is the value of links in dropdown menus decreased?

No, we find those links. I mean, these are usually internal links within a website. We crawl those links, and we treat them the same as any other internal links on the website. The important thing with dropdown menu links is that they're actually in the HTML when the page loads. That's the more important part. It's really rare, but I've seen it happen: sometimes you click on the menu on top, and then it does a JavaScript request in the background to load the menu items. And for Google, the hard part is that we don't know we need to click there so that the JavaScript does its work. We essentially look at the page as we find it when you load it in the browser. So if these links are there, then that's essentially a good thing to have.

Are site-wide links still bad for Google ranking? Do they harm the link profile of a website? Is it necessary to disavow them?

So I assume these are site-wide links from another website. From my point of view, if these are natural links from another website, that's fine. It's not the case that site-wide or not site-wide makes any big difference there. It's more a matter of: are these natural links, or are these links that are there because you have a deal where you're trading links or something like that, or you're buying advertising on this website and they're not using nofollow? Those are the factors that we take into account. Whether or not it's site-wide is irrelevant for the most part. Sometimes site-wide links are something that stands out when you look at the links to a website, so you might just see them more visibly, especially in Search Console: you'll see something like 100,000 links from this website, but actually it's just one site-wide link.

Do relative internal links instead of absolute links have any negative impact?

No, not at all. You can use relative internal links, you can use absolute links; that's completely up to you. Sometimes for development purposes, relative links make it a little bit easier to test and to try things out. Sometimes a CMS automatically generates absolute links. Both of those are perfectly fine. They can also be used in hreflang, they can be used for pagination, and they can be used for rel canonical as well. But for rel canonical, I wouldn't recommend using relative links, because it's easy to get them wrong, and you don't see that immediately when you open the page. For everything else, essentially: you click on the link, and if the link works, then that's OK for us. It doesn't matter if it's a relative or an absolute link.

Why does Google sometimes choose featured snippets with terrible content?

I don't know. So this is a topic that comes up every now and then. Not so much featured snippets specifically, but things like: why does Google do this really stupid thing? Or why does Google's algorithm not catch this spammy site? Or why does Google's algorithm show this site that is really terrible? And why doesn't it show my website, which is just fantastic? From my point of view, I think the easy answer is: we're not done. It's not the case that web search is finished and everything is complete, and Google engineers just go to the office and play games all day. There's always work to be done. So there are always these bad cases that come up. There are cases where we get it really right.
There are cases where sometimes we get it really good in English and really terrible in German, or really terrible in Czech. Sometimes it might be the other way around. So a lot of these are situations where it helps us to get feedback. It helps us to understand what is working well and what is not working so well. And we submit a lot of feedback internally as well, to the search and ranking teams, to really help refine our systems. And there's always feedback coming in. There's always something that doesn't work well. Sometimes it's a tricky query, where you search for something and the query doesn't really make sense, but Google gives you this really stupid answer. Sometimes there are things where you search for something normal and Google gives you a wrong answer. All of these things can happen, and we work to try to improve them. I think if we had everything fixed and everything were perfect, then we would probably still go to work and try to improve things even more. But there are lots of things that still need to be done. Just as an example, internally within Google, we have a special form that we can fill out where we say: this search result is really bad. And just from Google employees, we get, I don't know, on the order of 20 to 50 of these submissions every day, things where maybe within the family or among friends, people have found a weird search result. And they think: for this query, Google provides me this really terrible result, and that's something that we should fix. These are things that all go to different teams within Google, and some of them are used to make quick fixes. So if we think it's really important that we fix this one problem, because it's really visible and really ugly and we really don't want to be associated with it, then sometimes we can fix it quickly. And a lot of times, we take this feedback and we just use it for the next generation of algorithms. So we will maybe take something where we say, oh, the dates in the search results are wrong, or the featured snippets are bad because we don't take into account that in Czech, this means one thing and that means something different. These things can happen. And that's something where improving the algorithms makes more sense than saying: I will manually fix this one search result. Because manually fixing it takes a lot of time, and there are lots and lots of pages out there, and we don't have that many employees. So sometimes we do get things wrong. And for those cases, feedback is really helpful for us. But we can't promise that we'll manually fix all of the feedback that we get submitted.

John, how can we help you? Is there any way to share these results with you?

Anything that you want. So in the search results, way down at the bottom, there's a feedback link. That's something that is tracked. For featured snippets and for the knowledge graph, there's also a feedback link right below the item. That also helps us; that's also tracked by the team. And for bigger issues, where you think this is really embarrassing and wrong and shouldn't wait for someone to notice it in the feedback, you can always send it to me directly. You can send me a note on Twitter or on Google+ and I can pass it on to the team.

John, John, John. Sorry, just one thing, John, I need to be quick. Just two things about your previous answers. Why are we asking you about the combination of rel next/prev and also noindex,follow?
It's because we have a second search engine in the Czech Republic, called Seznam, and it doesn't support rel next/prev. So that's the reason why we are asking you about this. And the second one I forgot, so you can comment.

Yeah, I think that's very common, the case that other search engines or other web services use slightly different markup. And that's why we also try to be a little bit flexible with what we support and what we require. So sometimes you need to do it like this for Google and like that for another search engine, or maybe one way for Facebook and Twitter and another way for Google. And the combination is sometimes a bit weird. But from our point of view, we try to support whatever we find on the web.

John, I'm curious about the thing that you mentioned. Is it OK to have a special meta robots tag only for Googlebot?

Sure.

OK. Yeah. It's not cloaking, it's OK?

That's perfectly fine. What's important with the robots meta tag is that the most restrictive one is the one that is used. So it's not like with the robots.txt file, where we use the most specific one; here we use the most restrictive one. So if you have meta robots equals noindex and then meta googlebot equals index, then we will still use the noindex.

A different situation: when the user agent is Googlebot, I serve something different than otherwise. Is that OK?

I don't see any problem from our side with that. I think the main problem you would have there is maintaining that properly, making sure that you have the rules set up right. Because if users see it like this, or one search engine sees it like this, and Googlebot sees it the other way around, and suddenly you mix those two up, then you don't notice that very quickly. It's really hard to check. So that's where I see more problems there. You can also do it with JavaScript, if you want. But this is also something where you really should test to make sure that we pick it up properly, the way that you want to have it done.

OK, let's see. The next question is an easy one. What do you think about broken link building? Is it natural in your eyes if a broken link is replaced by another one?

OK, so maybe this isn't that easy. But from my point of view, this is something that's between you and the other website. If you provide the other website with information and they say, well, thanks to your information, we found that there were some broken links and we can fix those, then from my point of view, that's fine. That's not something that I would see as being particularly problematic. It's kind of like reaching out to a website, and they add your link to the list of links that they already have. I think there is probably an amount of this that is healthy, in the sense that you're not spamming all kinds of websites and saying, hey, this link on your website is broken, you should fix that link. But essentially, I don't see it as being that problematic.

Rich cards should be global now. We implemented them on recipe sites about a month ago. It's implemented correctly according to the testing tool, but we don't see them in Search Console. What could be the problem?

So I think this is probably tricky, because I don't know which of these structured data types are available in which locations. It might be that we've rolled this out in some countries but not in all countries yet. So that's one thing where I'm not 100% sure when it comes to recipes and when it comes to rich cards specifically.
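As a quick sketch of the robots meta precedence described a moment ago, where the most restrictive directive wins:

```html
<!-- Conflicting directives: this page stays out of the index, because
     Google applies the most restrictive rule (noindex). -->
<meta name="robots" content="noindex">
<meta name="googlebot" content="index">

<!-- To give only Googlebot a special rule, address it directly: -->
<meta name="googlebot" content="noindex">
```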
The one thing to watch out for with rich cards that require AMP is that you make sure to put the markup ideally on both versions of the page, on the desktop canonical page as well as on the AMP page, so that regardless of which one we pick up, we can understand that this markup is there. That's one thing to watch out for. The other thing that I would do here is just search locally and see which ones are actually already shown and which ones are not shown yet. Maybe it's just the case that we haven't been showing them in the Czech Republic so far. Especially with the different types of structured data, sometimes that goes quickly, sometimes it takes a bit of time. It depends on what is happening in general in the area, with regards to how quickly we can just make this available globally, or how cautious we have to be with taking it step by step: getting some sites signed up and saying, OK, let's make sure that we have a really good experience in the search results. So I don't really have a great answer here with regards to the recipe snippets in the Czech Republic. If you want to ping me on Twitter or on Google+, I can double-check with the team and see what the current status is there, so that you're not stuck in a situation where you're waiting for us, while actually we've noticed that your markup is wrong and that's why we're not showing it.

I wanted to say that that's how it is in practice.

Yeah. I don't know what the right answer is. So we're an American-based company. Sometimes when teams in the US say we are rolling out globally, that can mean they've reached Canada and maybe the UK. That's already kind of global, but not the whole-world kind of global. So that's sometimes tricky. But especially with rich snippets and structured data, it can depend on the actual type. It can be that rich cards themselves we show globally, but the recipe type of rich cards is one where we're holding back and waiting until everything is perfect. So it's sometimes really hard for us, even internally, to know which feature is available in which countries, to guide people to actually implement it more.

OK, let's see. A question about the indexing of an iframe. Does Google index the content of an iframe? In my project, a blog aggregator, some articles in iframes outrank the original blogger articles. We have many more links to this page, but I think the title shouldn't be enough for a position on the first page.

So yes, since we render these pages, it can happen that we take the content of an iframe and show it as part of the page when we render that page, and use that for indexing in search. It's something where the sites themselves can set, I believe, an HTTP header or a meta tag on the page that prevents those pages from being iframed, which is something that they could do. Whether or not it's the right thing, ranking-wise, to show the page that is iframing or the page that is the actual source, I don't know. It can happen either way. For the most part, I think we would try to show the original source; that's usually the cleanest one anyway. But I wouldn't see it as per se wrong if we show the iframe version in search as well.

Is there a brand relationship between different domains of an international company? In other words, would there be a benefit if an international company had a single domain with language and product versions?
Would there be any benefit, essentially, I guess, to different country-code versions versus just one version of the website? I think, for the most part, this is more a marketing question than really an SEO-type question. Because whether these pages exist on one website or on multiple websites, for us it's pretty much the same thing. Sometimes there are policy or legal reasons why a website or business might need to use local domains. Sometimes it's something that doesn't make so much sense. So especially if the content is exactly the same and you just have different ccTLDs, and everything is showing the English version of the content, then I don't think it makes sense to really keep it separate. But sometimes there are good reasons to separate it out. From an SEO point of view, maybe some situations will fall towards the single-domain version ranking better, and some might fall towards the multiple-domain versions being ranked better, just because we can say clearly this is something that's more specific to individual countries or individual languages. But it's not the case that I would say always like this or always like that.

Another one of those questions with regards to, I guess, the quality of the search results: do you think it's OK to have the search results all from one domain?

So I didn't check the search results here. But yes, in general, it can be the case that all of the search results are from one domain. I don't think that's, by definition, something wrong. Sometimes it's a bit weird; sometimes it makes more sense, sometimes less. But it can happen that there are multiple results from the same domain on the same search results page. And in talking with different ranking people and user researchers, they see it similarly: sometimes it makes sense, and sometimes it makes sense to show more diversity in the search results. So just because the same domain is visible a bunch of times for the same query doesn't mean that something is broken or that something should change. So yeah, I'd have to take a look at this specific query to see if this is something I might want to pass on to the team to double-check. But for the most part, it can happen.

How does Google deal with spider traps, like infinite pagination or a massive number of URLs created by poorly designed e-shop filters? Is there some kind of penalty?

So there's no ranking penalty there. But it does mean that we sometimes waste our time trying to crawl and index things that are irrelevant in the end. This is something that I often see when I do site clinics at conferences: I'll see that we crawl maybe 1,000 times the number of URLs that your site actually has. And if that's a matter of 10 pages versus 10,000 pages, maybe it doesn't matter, because we can still crawl all of that in one day. But if it's a factor of 100 that we have to crawl, then that can result in us wasting our time crawling a lot of URLs that don't actually need to be indexed. And it's not so much that the site will rank worse, but we will notice changes in your content a lot later. So especially if you have news content, if you have new products or changed products or products that are on sale, then it might take us a long time to really notice that, actually, this one individual page back here is the one that is different, the one we need to crawl. So you're wasting the crawling on a lot of things that are less important.
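One common way to close off a trap like this, sketched here with made-up paths and a made-up parameter name, is to block the problematic URL patterns in robots.txt:

```
# robots.txt: keep crawlers out of endless filter combinations
# (the /shop/ path and the "filter" parameter are hypothetical examples)
User-agent: *
Disallow: /shop/*?*filter=
Disallow: /calendar/
```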
So if you run across something like this when you're reviewing a site, if you run something like Screaming Frog over a website and it just gets stuck in millions and millions of URLs, then you might want to figure out why it gets stuck there and block that path through the website.

Yep. And I'd like to ask you something quite similar to this question. What's the best way to handle product variants in an e-shop? Some products or brands of products differ only in color or in size number. Is it better to make one page where I list all the colors and all the numbers, or to create a separate URL for every size and every color? That means a lot of URLs to crawl, and the value isn't very concentrated.

Yeah, that's a really good question, one I get a lot, and one I don't have a perfect answer for. There are two things I think you need to balance there. On the one hand, when you create separate pages for all of these variants, all of those pages have to gain value on their own. They have to stand on their own, and they will be ranked on their own. So if you have different size shoes, then each shoe size, if it has a separate URL, will have to rank on its own, which means it has to gain value on its own. So that's something where you might say, well, it's easier if I just have one URL, and I can rank that one URL, because all of the value is concentrated in that one URL. On the other hand, if you have variations that are really unique, that are special, that people are explicitly searching for, then it can make sense to say: I have this one special version. So for instance, when you have different shoes, maybe the shoe size doesn't matter. But maybe you have a shop that says: I only have the larger shoes, or extra wide and extra small, and this is what makes my products very unique. Then maybe you would create special pages for those specific versions. So it really depends on your shop, on the content that you have, on what people are explicitly looking for. That's how I would look at it, kind of like: if you had a catalog that you were printing out and giving to your potential customers, would you create a separate page for this product, or would you say this is just one of the different attributes that's listed at the bottom of the general page?

All right. Let me run through some more of the questions that were submitted, and then we can get to more from the room, from the people who are joining. What about links in hidden content, such as read-more sections, accordions, and tabs? Do you consider them the same as the visible part of the page, or is there some devaluation, as in the case of hidden text?

In general, if the content is not visible, then we'll try to treat it as such. That goes for the visible content specifically. A really common case where we take that into account is the snippet that we show in the search results: if someone is searching for something and it's within that hidden part of the text, then we'll probably not use it in the snippet. With regards to links, we generally use them like any other links on the page. So it's less an issue of not being able to crawl a website because a link is hidden away; we'll still try to follow it within the website normally.

When do you place the HTML after JavaScript rendering into the cache?
So the Google cache, if you do a cache: query or if you click the cached link in the search results for a page, only shows the HTML version of the page. So sometimes, if you use a JavaScript framework, you look at the cached page and you say, oh, it's empty. It's not the case that it's a problem or anything; it's really just that we show the HTML, by policy. And that's the only thing that we show in the cached version. Sometimes some JavaScript can still run in the cached version, but that depends on the security settings and on the way that you have set up the JavaScript, and it doesn't mean that this content is actually in the cached page. So you can look at the HTML view of a cached page, and you'll see it's really just the HTML. I don't see that changing, at least not at the moment. It's possible that sometime later that will change, but not at the moment.

Sitelinks: sometimes Google doesn't choose the right sitelinks. What's up with that? How can we fix those?

For us, sitelinks have become something where we essentially use normal web ranking for them. So it's not like it was before, where we had this special situation with this page and these sitelinks always associated with it. It's really normal web ranking, and sometimes we just show the results in the sitelink style. So if a page is really wrong and shouldn't be indexed in the normal search results, then you would use noindex on that page as well. That's sometimes tricky when it comes to sitelinks, because it's not that you want to remove the page; you just don't want it to be linked from this other page. And at the moment, that's not something that you can specify. If you see situations where sitelinks are really, really bad, I would definitely post in the Webmaster Help Forum so that we can pass it on to the team and have them take a look at it as well. But there's no manual way for you to say: this URL and this URL should never be shown together.

When does Google plan to stop crawling escaped-fragment URLs?

I don't have any time frame on that. We've announced that we are going to stop doing it, so it's possible that the engineers flip that switch at any given time. And our plan for when we do this is not to drop these URLs completely from search, but to take the hash-bang URL and render that page ourselves, instead of using the content from the escaped-fragment URL. So these URLs would continue to work. They would continue to be indexable in search. We would just render them ourselves rather than using your rendered version.

Do you plan to report crawled-but-not-indexed pages in Search Console?

This is something that we've been looking into. There have been some explorations, some betas that we've tried out with some sites, to see how we can improve the information that we show specifically around the indexed content, where we can say: we crawled this, but we didn't index it because of this. Or maybe it has a noindex, or maybe it's just a duplicate page, or it has a rel canonical pointing somewhere else. These are all things where we're looking into finding ways to share that with webmasters. It's sometimes a bit tricky, because for some of these issues, technically you could say it's an error, or technically it's something that maybe the webmaster cares about, but practically it's not really something that you need to care about. So for example, maybe you have a URL in your sitemap file that redirects to a different URL.
And from our point of view, we would say, well, the sitemap file should have the final destination listed, and if it doesn't, then that's not so great. From a practical point of view, it still works: we can still follow that redirect, we can still index that page. It's not going to break your site. It's not going to drop your site in the rankings if you do it like this. So if we show that to webmasters and say, this is a problem that you could fix, will that just cause panic? Will they get focused on all of these details that don't really matter? Or can they understand that, yes, this is a problem, but it's not a critical problem, and maybe they'll look at it the next time they do a redesign of their sitemap setup, and focus on it like that? That balance is sometimes really hard. And I know externally it's hard to guess what Google thinks is important. So every time Google brings some warning or something similar into the search results or into Search Console, everyone panics and says: oh, my site will be removed from search if I don't follow these guidelines. And for the most part, that's really not the case. Sometimes we do just want to show warnings as warnings and say: hey, you could do better here, and it could help out a little bit.

Does Google ignore the so-called stop words?

So stop words are words that are really common within text, something like "the" or "and", anything in normal text that is not critical to understanding the page. And for the most part, we do ignore those stop words. But sometimes they're just a normal part of the text, and we have to put them together. So if you search for something like "to be or not to be", then those are just a handful of words, but that combination of words is really what makes it unique, and that combination is what we should be looking for in the search results. So it's not just: are these individual words, minus the stop words, on those pages? But actually: is this combination there? And that balance between finding individual words, ignoring stop words, and finding phrases is what makes search tricky sometimes. So to some extent, we ignore them; we don't ignore them completely.

Why are some pages in pagination groups marked with rel next and previous still visible in the search results?

If they're not noindex, then we can show them. And especially if you do a site: query, we will show these URLs and try to make them available if someone searches for them. So that's by design.

If robots.txt is blocked some days, will Google block all links from a website?

Yes: if your robots.txt file is blocked so that we can't access it, then we will assume that we can't crawl anything from the website. If the robots.txt file just doesn't exist and returns a 404, then we will assume that we can crawl everything. But if the robots.txt file itself is blocked, if it returns a 500 error, or if it doesn't return anything at all because the server is not responding, then we will assume that something is broken, maybe on the server, and we'll stop crawling until we can be sure about the robots.txt file again.

Is there any solution for redirecting without passing PageRank?

So one thing that you can do is have a bounce page on your website and block that page with robots.txt.
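For illustration, a minimal sketch of that setup; the script path and parameter name are hypothetical:

```
# robots.txt: keep crawlers away from the redirect script, so the
# outgoing links that go through it pass along no PageRank
User-agent: *
Disallow: /goto

# Pages would then link out via something like /goto?url=https://example.org/,
# with server-side validation so it does not become an open redirect.
```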
That's something that sites have been doing since the early days, where you have maybe a bounce script somewhere on your site that just takes the URL as a parameter and redirects people to that URL, and that bounce script is blocked by robots.txt. That's something you can do. I would be careful that you don't make it open to all URLs, because then it might get abused by other people. A really common case that we see is abuse for phishing, where someone wants to send people to a phishing page, and instead they send them to a bounce page on your website that redirects to the phishing page, and then it looks like your website is actually redirecting to a phishing page. So what you want to do is avoid having this be an open redirect, and instead have some kind of validation behind it that handles this redirect and makes sure it only works for the proper URLs. And blocking that with robots.txt is an option.

Noindex in robots.txt doesn't work. It's something we don't officially support, so I don't even know what the current status is, whether it actually does anything. It's something that we discussed, I don't know, maybe 10 years ago with different people, whether it would make sense. The main problem we have with noindex in robots.txt is that it's a really, really big gun that you point at your feet and shoot with both eyes closed. If you set it up wrong, then suddenly your whole website has disappeared from search. It can affect the whole website really quickly, and it's really hard to find. Whereas if you're just disallowing access to your website, we can still show it in the search results; we don't have the title and the snippet, but we can still show it. So this is something where we've been really cautious about making that available. And it's not something that, at the moment, we have any plans to support. So if it does something now, I would not assume that it will continue doing that in the future. It can change. It's not something that is officially supported.

Let's see. I hope I clicked the right ones here. Would you recommend adding a nofollow attribute to links to other language versions of our website?

No, I would just link normally to your different language versions. Links between the same page on different language versions help us understand that this content is related and that it's different-language content. So that's a good thing to have.

How do I remove or influence organic sitelinks, now that the sitelinks demotion feature has been removed?

We talked about this briefly before. But basically, you can't. You can use a noindex if you want a page not to be shown in the search results at all. But otherwise, it's something that we see as a part of the normal organic ranking.

And finally, a question here about JavaScript indexing: I tested JavaScript indexing on my site, and it didn't work the way I expected. What could be the problem, essentially?

So I don't know which URLs you were looking at; that's one thing, so it's hard for me to double-check what was happening there. But one thing that we have set up in the last couple of months is a kind of working group for JavaScript sites in search. So I would recommend checking that out. I think we posted about it maybe two or three months ago.
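(Coming back to the earlier question about language versions: rather than nofollow, cross-linking the versions and annotating them with hreflang helps Google connect them. A sketch with placeholder URLs; each version lists all versions, including itself:)

```html
<link rel="alternate" hreflang="cs" href="https://www.example.com/cs/stranka">
<link rel="alternate" hreflang="en" href="https://www.example.com/en/page">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```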
And it's a private group that you can join, where you can discuss these types of issues and get feedback from other people who are using similar tools and similar frameworks, to see what works well in search and what doesn't work so well. For the most part, we can index and render a lot of JavaScript types, but there are some things that are not supported, and sometimes it's hard to double-check what works and what doesn't. I'd recommend using the Fetch and Render tool in Search Console to try things out. If it works there, it should essentially work for search as well. But I would take the details of what you've noticed, and if those URLs are still live on your website, then post about those in the working group, and I can take a look from there.

All right. Wow.

I'll send you a message about the JavaScript rendering. OK. We started some minutes late. Would you give us five more minutes?

Oh, definitely. Sure.

Great. And how much time do you have?

I don't know.

OK, please, let's start with a simple question first, and we'll see how it goes. So how much time do you have? Five minutes?

I can do, I don't know, maybe like 10, 15 minutes, something like that?

Wonderful. OK, OK. Can you please tell us more about the higher SERP volatility in the last four days? I know Google may change almost every day, but this seems like something bigger. And do you have some more specific information about that?

I don't have anything in particular around things that have been changing recently. So that's really hard to say. I know we do make changes all the time. The tricky part that sometimes throws people off is that a lot of these changes that we make, when we look at things like quality or when we try to understand pages better, affect some sites more than they affect other sites. So sometimes we'll see people in one language complain about changes; sometimes we'll see affiliate sites complain about changes. It's really something that depends a lot on the site and on the changes that are happening. So I don't have anything specific to mention.

And do you think you will still discuss these algorithm updates on the Google site in the future? Because this year there were 25 updates and three major changes with really super high volatility, specifically on the 1st of February, the 6th of February, and the 8th of March. The last one was called Fred, according to Gary Illyes.

I would bet we had a lot more changes than the ones reported as rolled out. So that's kind of what I meant: some of these changes are more visible, and some of these changes are less visible, especially to the SEO community. Sometimes it's surprising, because the search ranking teams will contact us and say: hey, we're going to make this change in the search results; it's a really, really big change, and it's an important change for us. And nobody notices it in the SEO community. It's a big change, it affects a lot of search results, but for some reason or another, nobody is tracking that specific variation, and they don't see it. So when we do have changes with things that are actionable for site owners, we will mention them. If, for example, like the mobile interstitial changes that we had, there's something that we can mention that is important for you to change or react to, then we will try to call that out. If it's just a normal quality change, then there's nothing we can tell you.
It's like, oh, that doesn't really help me as a webmaster, right? But a lot of these changes go in that direction, where it's not "you should do this technical change" or "you should change this line of HTML". It's more: make better websites that work, so that users are happy with them. And that's not something that we can really tell webmasters, like: today we launched a change that shows better websites in search, and we'll do that again tomorrow. These things just keep happening. What I would do as a webmaster is also try to keep up with all of these changes, in the sense that you do constant A/B testing on your website and really try to stay on top of everything that your users are looking for and want to find on a website, so that your website is also constantly tweaking its way higher and higher. Then, when search makes changes, we know this is a really good website, and we'll continue to show it really well in search, because everything tells us this is a good result.

I have a lot of questions. We wanted to give a chance to anybody else, but he has the stronger voice. Which one? OK. I have had one thing on my mind for the last few years. I was disappointed when you closed the authorship concept, because from my point of view the author is equally important as the website where the content is published. After you closed this concept, your colleague Gary still recommended that authors leave the same markup on their websites. After a few months, he dismissed this claim. But as we know, many online marketing tools, and I think the same is true at Google, are able to recognize, with machine learning algorithms, the author box or author details. So my question is: does Google detect or recognize the author box inside the content of articles? And if it does, do you process and work with some of these details?

As far as I know, we don't use that at all. So we haven't used the authorship markup for quite some time. And as far as I know, we don't try to figure out who the author might be for any specific page. It's possible that there might be some machine learning algorithms that are trying to understand which websites are making good content, and that might be accidentally discovering something similar to authorship there. But definitely, the markup itself is not used at all anymore. I know it was still used for a while after we stopped showing the authorship photo in search, for things like the in-depth article features in search. But I think we've stopped using it there for a couple of years now. So it's really, really, really gone. I agree it was interesting to see in the search results, but it was also one of those things where, when you talked with the team that was working on this, they said that the people who were most likely to implement this authorship markup were those whose content we were most skeptical about. So it's kind of a tricky situation, in that it was almost more worthwhile for us to see which sites weren't using authorship markup, because those were probably the ones creating good content on their own. And then it's like: why do we support this markup if we're going to use it in the opposite way?

I'm not talking about the authorship markup; I'm talking just about the recognition of author posts and author details. So do you process that on Google's side? And also, do you count author posts as a part of trustworthy content, for example?

I don't think we use that at the moment.
So as far as I know, I don't think we use anything like that.

OK. I'm wondering, how do you find out that some page should be JavaScript-rendered? There is one case that is easy, when the page is empty, but in other situations, how do you find out that the page should be rendered?

We actually try to render all pages that we crawl now. So it's something that we do for pretty much every page that we crawl: we try to render that page as well, and we use the rendered version of the content for our index. So it's not so much that we try to recognize whether or not to use JavaScript; we just try to use it for every page. And there are some subtle cases, like the one you mentioned: if there's no content in the page's HTML, then we have to rely on the rendered version. And sometimes what also happens is that we first index the unrendered version, and then in a second stage we actually render the page to pick up the rendered content. That sometimes creates a small time lag in between. But for the most part, we try to do this as quickly as possible, and we try to do it for pretty much every URL that we index for web search.

Can we find out which content or which URL is treated as a duplicate by Google? Is there any way for us?

There's no magic tool for that. What gets as close as possible is to do an info: query in search. So you type info, then a colon, and then the URL, and we will show you the URL that we think is the canonical one for that URL. So you can try that out. I don't know what would be a good example. Google is kind of a tricky domain to try it out on, but you could try it out for your website, with or without www, and see which URL we think is the canonical one there. There are some subtle things where sometimes the info: query is not perfect at showing which one is canonical and which one is duplicate. But for the most part, it works really well.

OK, can I ask you about the info operator?

Sure.

Great. When you recommend using info:, is there any way to automatically ask Google for the info of, I don't know, hundreds or a thousand pages?

No, not at the moment. So if you tried to scrape Google like that, then that would be against our webmaster guidelines, our terms of service. I don't know what the plans are for the future. It might be that this is something that we can provide as an API to users, because I have been getting more and more questions around this for things like hreflang markup, where you have to link between the canonicals, or if you want to double-check whether the URLs in your sitemap file got indexed or not. So that might be an option. But at the moment, I don't think there's any automated way to get this information.

Maybe I have a question: will Google ever open some official search API? I know you have CSE, the Custom Search Engine, but it doesn't return the same results as a normal Google search. So do you think about that?

So I think the tricky part is really that there's so much personalization happening that you can't really look at a generic search results page and say: this is the ranking that is the right ranking for my site for this query. So that's something where I would assume we won't be providing something like that. We might have some way for you to say: I care about these queries for my website, and to get that information. You can get that through Search Console already, with the Search Console APIs. But maybe there are other ways that you could get that information.
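For reference, the query format being described, with a placeholder URL:

```
info:https://www.example.com/some-page
```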
But I don't see it happening that we create an API for people to query our search results for any random queries that they want. I know that's something the engineering teams are always battling with as well, in that a really large percentage of the requests that come to our servers are from scripts and scrapers and random sites that are trying to track rankings, or just spammers in general trying to get free content that they can copy and paste onto their websites. And that's a surprising amount of resources that we have to spend just maintaining enough servers to give all of these scripts some answers. So I don't know if we would have an API like that any time soon.

OK, John, is there anything that you would like to call out for Czech webmasters, something they should do better, or some initiative you'd like to take up, something quite recent that you would like to see solved in the Czech market?

I'm not really aware of anything in particular. I think the whole mobile-first indexing is something to watch out for; I would keep an eye on that. One thing that we did notice, not specifically for Czech webmasters but in general, is titles and snippets: we see a lot of sites get them kind of wrong. And that might be something where, with relatively little effort, you can make a pretty big impact on your site and how it's shown in search. In particular, with the Open Directory Project now gone, we rely completely on the titles that you have on your pages. And if those titles are bad, then we have to make something up, and sometimes that's not so good. So making good titles and making good descriptions is something that feels very basic, but it's easy to test, easy to double-check your site for, and easy to fix. So I would totally look into that.

And when people are trying to do mobile optimization, is there something that they fail at most, something they do like an over-optimization? Especially, say, when they are trying to achieve better speed on mobile.

I think speed is always something you can focus on. There are so many studies out there that say: if you get, I don't know, 100 or 200 milliseconds faster on your pages, then suddenly you have so many more conversions. That's something where, even without any SEO effect, there's so much value in making a website really fast. So I would totally spend time on that. With mobile in general, one thing we notice is that some mobile versions of pages don't have the full content. This is something where there have been fights and almost religious discussions among people creating websites: some say that people on mobile don't want the full information, they want something really short and really brief, and they don't want all of the images. And other people say, well, people on mobile sometimes don't have a desktop with them; they don't always have a laptop along when they need to look something up, and they want to find all of the information. So especially with mobile-first indexing, we will use the mobile version of your page for the index, for the desktop search results as well. Anything that you don't have on your mobile page will not be in the index afterwards. So that's something I would totally focus on.
If you're working with a client for the first time, or if you're just generally wanting to check things out, use your mobile phone and try to do everything on the website that you would normally do, and that includes trying to buy something from the website. A lot of times, I'll see a website kind of work on mobile, but then when it comes to the checkout, suddenly the cart doesn't work, or the cart redirects to a desktop page where you have to zoom in and fill out 20 fields to try to find the right place to put your phone number. And then you hit Submit, and it says, oh, there was an error, and the form is empty again, and you have to start over. On mobile, this is really deadly; it kills conversions. And when it comes to content being missing, that content will be missing in search as well.

OK, so I guess our time has run out. So John, I'm really glad that you gave us so many open and really nice answers to such a huge number of questions that we had here and on Google+. So thank you once again for this opportunity.

Thanks for having me. It was really good to be with you.

Great, so thank you. And enjoy the rest of the day as well.

Thanks. Bye, everyone.

Bye, thank you.