All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what we do are these office hours Hangouts, together with webmasters and publishers, SEOs, who care about web search and are curious or have questions around all of that. All right, so we have a bunch of questions submitted, but if any of you want to get started with something for the Hangout, you're welcome to jump in now.

Yeah, I have an additional question, about comments on blog posts. I see some famous influencers pay a lot of attention to comments, to replying to all comments, even if they don't reply with a long sentence. And I see two directions it helps. First, you extend the content, and people spend more time on your website. And second, if you reply to these comments, people come back and read your replies, and so on. How does it help? Because other websites, like Search Engine Journal and Search Engine Land, don't have any comments. And I know they had them, but they deleted them. I don't know why; perhaps they have some reasons. Can you explain this?

So we do use comments as a part of the content on a page. So if the value of your page goes up because of the content that is also there, the comments from other people, I think that's a good thing. The other aspect of people coming back to read your comments, I think that's good, too. And then there's also a little bit the aspect of, if you respond to comments, it feels like you're building a community. And if people come back explicitly for your website, if they search for your website explicitly and say, I really want to see Anatoly's blog, then that's, for us, a sign like, for these queries, we really need to show your blog. So we can pick that up as well. So I think all of that makes a difference, makes it a little bit easier for users to go directly to your website, because they remember your name, they search for your name, and for a query where you're searching for your name, the only result that really makes sense is your website. So that's almost like free traffic. And building a community means that people come back regularly and help promote it for you. I think all of that plays in. It's not that it's a direct ranking factor, that we would say, oh, someone is searching for your blog, therefore this blog must be the most fantastic blog ever. But at least for those queries, that's the website that should be shown, because someone is saying, I really want to go to this website.

The other side, which I think some of the other blogs have seen as well, is that managing a comment section can take a lot of work. It's not easy to stay on top of the comments, to make sure that the comments are high quality, to weed out the spammy comments, to regularly go back to old posts and reply to comments from people. All of that takes a lot of work. So I can understand why some sites might choose to turn off comments. We did that for the Webmaster Central blog as well, where we noticed that essentially the blog comments weren't adding that much value. So we went through, I think, a whole year's worth of comments. We exported them all and analyzed them a little bit. And it seemed like some posts had some really good comments.
But a lot of the posts had kind of irrelevant comments that weren't really adding that much value. So that's why we decided to turn those off. I think if you have the time, and you have maybe people who can help you out with this, or you have a system in place that helps you bubble up the higher-quality comments, I think that's a good idea. I think that adds a lot of value. And especially if you can build a community around your content, then these are people who are explicitly coming back for your content. So even if your website disappears completely for generic terms, if something weird happens, they will still search for your website explicitly and say, I really want to go to this guy's blog. I think that's a good thing.

One more question, about something I heard from, I'm sorry if I don't pronounce it correctly, Harry Ellis. He said an interesting thing, that comments help to rank your website more than even social traffic. I'm interested in this one. Are social traffic and comments ranking factors, or indirect factors?

It's more indirect. So it's not that if you have comments, we will rank your website higher, but these comments have content in them as well. They have text in them. They have keywords in them. And if people are searching for something like that and we see it in the comments, we can show your page for that. So I think it's more that direction, rather than there being a magical factor. It's not like, if you have 17 comments on a blog post, it must be a good blog post. It might just mean that you're not paying attention to spam, for example. So purely having a number of comments doesn't mean that a page would rank higher.

OK, thank you. Thank you a lot.

OK, let's see what we have submitted. Let's see how YouTube orders these comments. I would like to see the amount of voice search queries that a website had. Is there any possibility to see that?

At the moment, there isn't anything specific around that. I think I talked about this at SMX this week as well, a bit, with various people. I think one big aspect there is that it's important to try to understand what exactly you're looking for when it comes to voice queries. Voice is very popular, it's in the news everywhere, and people wonder, oh, how much voice is actually happening? But there are lots of different ways that people can use voice to find a website. The simplest approach, which probably is the most common one, is someone goes to Google, clicks the microphone button on a phone, and says the keywords that they would normally use to search, just speaking them instead of typing them in. So to me, that's not really a voice query. I mean, it's done by voice, but essentially it's like a different type of keyboard that they're entering normal queries with. They see the normal results in Google as well. They can pick one of those results and navigate to that website. And I don't know the numbers, but my feeling is this is pretty common. And this is something that, from a website owner's point of view, doesn't really change anything for you. It's like they're using an old laptop keyboard, or they're swiping on their phone. It's a different kind of keyboard for the same kind of queries. So that's something that's already shown in Search Console. It's not shown separately because, from our point of view, it's essentially like a normal search.
The other thing that people sometimes throw into the same group is everything around the assistant devices. So if you have a Google Home, or an Amazon Alexa or an Echo, or whatever all of these devices are called now, you can go to a lot of them and ask them a question. And sometimes, if it doesn't know the answer directly, it will say, according to this website, the answer is this. And that's something that matches a little bit more what I think the general marketing vibe around voice is bringing out. But I assume that's still fairly rare. And that's also something where it's more like a featured snippet that you would see in the search results, rather than something where someone clicks on a result and goes to your website. Though I believe with Google Home, they send a link to your phone, so you can click on that link, go to the website, and get the full context. But that, at least at the moment, is something that I don't think is counted anywhere specifically.

What I would recommend doing there, especially if you're unsure what kind of voice queries would make sense for your website, is to get one of these assistant devices and just try it out. They're really cheap at the moment. Try to use them on a day-to-day basis so that you understand where the limits of the technology are, and where you think your site, your business, might be able to fit in. Think about what kind of things people might randomly want to ask an assistant, or where your content fits in. And then you could take it to the next step and say, OK, I found these maybe 20 use cases for my website within an assistant device. Would it make sense to build a skill or an assistant app that specifically shows this kind of content, or rather says this content to users? For example, at SMX, I was talking to someone who used it, or is currently using it, to provide a list of frequently asked questions and to give those to people who have specific questions. So you would install this app or this skill on your device, and then you can ask it the common questions that people would otherwise maybe go to a physical store for. And that way, you have a way of really guiding users, bringing them the voice information. And then you can also track what people are doing with regards to your voice skill, or your voice app, or whatever they're called, and your business. So that's the direction I would go there. Instead of thinking, well, voice is this magical thing, try to split it up. Think about which numbers you're already seeing, and think about how you can maybe go in the direction of the more future-looking things, where people are asking your website specifically. Like, hey, I'd like to ask John's blog, what is the optimal title tag length? And maybe it can give you an answer. Or maybe you realize that actually people wouldn't be using voice for something like this; instead, they'd maybe like to read a longer article where there's more than just one number that is returned.

All right. Google doesn't rank thin websites. What's your opinion about a site or a blog with 40 posts that has quality content and well-optimized SEO?

So on the one hand, these are two very different topics: thin websites, and a blog that has a lot of posts with high-quality content. I think there's nothing really wrong with a blog that has high-quality posts and good content.
The thing that kind of worries me about this question is that, on the one hand, you're saying, oh, thin content is bad, but I have 40 posts on my blog, so is that thin content? And that, to me, sounds like you're trying to create a quantity of content rather than a quality of content. So it's not, for us, so much a matter of, oh, there are 40 blog posts here, therefore it must be a good blog. But more like, well, for this page, what is the quality of the content here? Is this something that we want to recommend to other users? Or is this just one page that was created so that the blog in the end has 40 posts? We don't have an algorithm that looks at a blog and says, oh, there are 40 or more blog posts here, therefore it must be high quality. We look at this overall, and we think, well, there's a lot of great content here. And maybe it's on one page. Maybe it's on 100 pages. Maybe it's on exactly 40 pages. It's more a matter of the quality of the content rather than the quantity of the content, or the quantity of the pages that are involved. So my recommendation would be to think less in terms of words per page or pages per blog, and rather to think about what information you have that users are interested in searching for, and how you can provide it in a way that answers their needs and helps them move forward. Ideally, in a way that maybe encourages them to come back and say, well, this was a really good blog, I'd like to stay up to date on what they're doing, and maybe search for it explicitly, or link to it from other blogs, saying, well, this blog post really inspired me, I think other people should see it as well. So that's the direction I would go. Don't count words or pages. Focus on the content instead.

OK. There has been a lot of outcry about decreasing organic traffic due to multiple features like featured snippets and People Also Ask. If searchers get their answers in the search results, they won't click on any result. So what is the future of SEO, or SEO agencies? I see the traffic and clicks from organic search going down.

So there are lots of people talking about things like this, and some people have really strong opinions. From my point of view, these are kind of natural progressions that are happening as the web ecosystem grows up. There are lots of really great websites out there. But the value of all of these websites for businesses is less a matter of the page views that they get. For example, if you're a small business, then what matters to you is not how many page views your website gets, but more how much traffic you're actually driving to your business, whether people are buying things from your website, or maybe buying things in person in your stores. So a lot of these features, I think, help users get to that content a little bit faster and to be able to make those decisions and go there. Say someone is looking for the opening hours of a business; then they want to go to that business, they don't want to go to that website. So that's where I see a lot of this moving towards. And I think the nice part about this is that these are normal progressions on the web. Things are changing. Things are moving forward. And from an SEO point of view, a lot of the work that you do still makes sense. I don't see pretty much any of the SEO aspects really going away.
Maybe shifting away from, how many blog posts do I have to put on my blog, towards, how can I create content that attracts users, so that they come and visit my website, sign up for my services, sign up for my newsletter, or come and visit my business in person. All of these things, I think, have always been kind of the ultimate goal, and it's all getting a little bit closer together. So my recommendation there would be to focus less purely on page views, and more on what it is that you really want people to do on your website, within your business. And a lot of that ties back to the traditional SEO things, where you have to make sure that your content is visible in a way that works well for users and works well for search engines. You have to make sure that the content is crawlable and indexable in reasonable ways. None of that is going away. And all of the online marketing aspects that are involved with SEO, where you're working to promote your content, to make sure that it matches what people are actually searching for, none of that is going away either. So from my point of view, things are evolving. It's no longer just 10 blue links in the search results. But we've seen much bigger steps, like the step from offline to online. That was a really big one, and it's something that a lot of SEOs jumped on, like, this is a cool new challenge, I really love this mix of technical and marketing and non-technical things that I need to focus on. And similarly, things will be changing over time as well, where maybe it's even more closely connected, where you have to figure out ways to really highlight the value of your content in search, so that people come to your business, come to your website, and do whatever you'd like them to do there. So kind of back to the original question, I really don't see SEO just disappearing magically because some black box has somehow automatically figured out that people with this desire should be connected with this business, without knowing anything about the rest of the web. I don't see that kind of magical connection just happening without a lot of work in the background to make sure that all of these small steps in between can be connected as well.

Oh, boy. OK. So somehow, everyone disappeared. OK. Not really sure what is happening here, but it looks like it's still going, so I'll just continue until maybe other people jump back in. OK. This is awkward.

So this tweet is about, I think, a popular products carousel that was shown. One of the things there is that a lot of these newer search features don't appear in Switzerland, so I don't always see exactly what you're seeing, even if I try to reproduce that query. So it's a bit tricky for me to say exactly what is happening there. However, what we did announce, I think in February, is a way for people to use product markup on their web pages, especially for e-commerce sites, without going through the commercial product search functionality. For that, I'm just looking at the blog post that we did in February. On the one hand, you add the markup to your pages, and you make sure in Search Console that this markup is available. Woo-hoo, Nihai is here. Someone made it back. So you put the markup on your pages, and you make sure that we can pick up the product markup. And then you can also submit feeds to the Merchant Center.
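To give a rough idea of what that on-page product markup looks like in practice, here's a minimal JSON-LD sketch. The values are made up for illustration; the full set of supported properties is in the schema.org Product definition and Google's structured data documentation.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Product",
  "name": "Red suitcase with rounded corners",
  "image": "https://example.com/images/red-suitcase.jpg",
  "description": "Lightweight carry-on suitcase with four wheels.",
  "offers": {
    "@type": "Offer",
    "price": "79.99",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

That covers the on-page markup side; the feeds are the other half.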
So you can submit your normal product feeds there, and they could be eligible to be shown in the search results. So my guess is that what you're seeing there is kind of a mix of that: people are putting product markup on their pages and we're picking that up, and maybe they're submitting a product feed through the Merchant Center, which you can do without running Google ads. So that might be what you're seeing there. I'd recommend checking out the blog post from February, which is titled Help Customers Discover Your Products on Google, and seeing if that's something that might make sense for you. Again, with regards to the locations where this data is shown, I don't know. It might be that it's limited to certain countries at the moment, those kinds of things. I'll watch out for that. OK, cool. People are coming back. No idea what happened there, but all right. Let's grab the next question.

So, we're a large company of brands focused on unique online services, and in one vertical, we're targeting same-language countries with different TLDs. And I think it kind of goes into the different variations that can be used there. Would an algorithm be able to recognize each piece of content as targeted towards a specific country, despite the similarity of the content, instead of treating it as duplicates?

So if this is exactly the same content across different country versions, we would see it as duplicate. If you have the hreflang annotations for those pages, we would still be able to swap out the URLs on a per-country basis. So on the one hand, we would probably fold them together and pick one as a canonical. On the other hand, we would split that back up when showing it in the search results, with the appropriate URL. That can be a little bit confusing to webmasters in the beginning, because we say we index just one version, but we show all three of these versions. And essentially, that's normal. So it's not something where I would say there's anything problematic happening. It's not that your website would get demoted because of the duplicate content. It's essentially just folded together and then expanded again as people search. The one thing I would watch out for here is that you really use the hreflang annotations, so that we can guide people to the right versions.

Let's see. If a user has been to a website before, does that make it more likely that you will show them the same website in search, even if they've never been to the specific web page you're showing them in search?

I don't think so. I don't think we'd have anything specific where we'd say, this person has been to this website, but not this page, so we should show the website even more visibly in the search results. I don't think we'd have anything like that. It might be that there is an effect for individual pages, where if you've been to exactly that page before, we would show it a little bit more prominently in search as well, or say, well, you visited this page last week, maybe it's relevant for this query. I'm not sure if that's something we have with regards to personalization, but we definitely wouldn't have anything where we'd say, you visited something else on this website, therefore this one page will be the one that we show.

Does Google read text that is in images and use that for ranking in image search? Or is it only the alt text that counts for images?

We do have a lot of fancy machine learning things that try to recognize what an image is about.
But as far as I know, that's not something that would be a primary ranking factor for images. It's something that's fun to play around with. I believe on the Google Cloud developer console, you can set things up so that you can submit images and see how they're recognized. But as far as I know, there's nothing that we would map directly to image search. So my recommendation would be to do it the traditional way: alt text when you're embedding images, captions below the image, content around the image on the page. All of that really helps us to better understand what an image is about. And it's a quite direct signal, where you're saying, well, this image is really about that. Whereas if we had to go and look at images and think, is this a bagel or is this a dog, and we say, it looks like a bagel to us, then I don't think that would be sufficiently useful for anyone who's searching in Google Images. So it's really useful to have that direct context instead.

I have a question about the URL Inspection tool. If I click on View Crawled Page and see the HTML, what do I actually see? Is that the original source code, or the part that's rendered by Googlebot? If I click on the live test and choose Screenshot, do I only see the elements which have been seen and rendered by Googlebot? For example, if the page is a JavaScript page, can I use the URL Inspection tool to check what's visible to Google, or can I only use the cached page?

So there's a lot to unpack there. Let's see. I think, first of all, the last one you mentioned, the cached page, is really just the cached HTML that we received when we tried to crawl and index that page. So that wouldn't reflect what we would actually use for indexing, because it wouldn't include things like JavaScript when it's executed and changes things on a page. The cached page in the search results is really only the static HTML. In the URL Inspection tool, if you use View Crawled Page, then you would see the rendered version that we use for indexing. Sometimes you also see the static version there, when we haven't had a chance to go off and render that page yet. So essentially, that's the version that we currently use for indexing. We don't always have the HTML visible there so that you can see exactly what was shown. Usually, when that's not shown, it's more of a technical quirk on our side; it's not a sign that anything is broken on your side. If you want to see how we would theoretically render a page, then you can use the live test in the URL Inspection tool to, on the one hand, have the page rendered by Googlebot, with the screenshot, I think, and on the other hand, to also see the HTML and the JavaScript console, to see any errors that show up when we try to render that page. That's really useful if you're working on your web pages and you're not sure if Google is able to process the JavaScript on a page. You can use the URL Inspection tool to double-check what Google would be able to pick up there.

All right. Wow, I have no idea what is happening with people coming and leaving. If something is broken, I guess I'll find out afterwards.

It looks like WordPress, or maybe it's a plug-in, automatically puts image URLs in robots.txt. Would that hurt us for image search, or is it enough that the images are on a specific page that is not blocked by robots.txt?
So if image files are blocked by robots.txt, then we would not be able to use them for image search, for Google Images. We really need to be able to crawl the image files directly to be able to use them. For images, we also need the landing page. So we need both the landing page and the image URL to be crawlable and indexable, and if that's the case, then we can show the image in the search results. If the image URL is blocked by robots.txt, that wouldn't affect the performance of the page in web search, but it would prevent that image from showing up in image search.

The one thing to keep in mind is that not all images need to be indexed. For example, if you have a theme on your website with lots of graphical elements that are more there to decorate the page or provide the user interface, like little buttons or graphical elements that just add a little bit of value, then those could be images, but they don't really need to be indexed for Google Images, because nobody's going to search for, I don't know, the Submit button, or the Start Broadcast button in YouTube. Those are all things that people don't really need to search for directly, so you don't need to have all images indexed. I would focus more on the images that are actually useful for your website, and make sure that those are the ones that are indexed.

Another trap that I've seen a lot of people fall into is that it's easy to think about Google Images as kind of a JPEG search, and to think, well, I need to get all of my big images into Google Images. Instead, what I think is more valuable is to think about Google Images as people visually searching for your website. So instead of just saying, these keywords are associated with this image, think about what the user journey might be for someone who's visually searching for something that your website offers. For example, if you have luggage on your website, then maybe you don't just use something like, I don't know, this-size suitcase as the alt text, but you describe it in a way that people might want to use visual search to find it. Maybe it's like a red suitcase with rounded corners, something like that. So instead of thinking about Google Images as a way to just dump all of your images in there so that you'll magically get some visitors, think about how users might search visually, and then make sure that you're providing content in a way that matches that user intent, so that you're in the right place at the right time when people are searching visually for something similar to your content. With that in mind, there are probably lots of kinds of images that people wouldn't usually search for, where you don't need to put all of that content into Google Images. And there are other places where it really makes sense to think more about the user side than about just the keywords that you're trying to push. So that would be my general recommendation with regards to images.

Do jump links help a piece of content's SEO, and if so, how?

No, not necessarily. So jump links are links where you're linking from one part of the page to a different part of the page, where it just scrolls down to that part. That's more, I guess, a usability thing that helps users find the right part of your page. Of course, if you make a great website, then indirectly, that could be reflected in search. But there is no kind of magical factor that picks up jump links and associates them with search.
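For reference, a jump link in the sense discussed here is just an in-page anchor. A minimal sketch, with a hypothetical section name:

```html
<!-- Link near the top of the page -->
<a href="#pricing">Jump to the pricing section</a>

<!-- The target, further down the same page -->
<h2 id="pricing">Pricing</h2>
```

Clicking the link scrolls within the same page rather than loading a new URL, which is why it's treated here as navigation help rather than a ranking signal.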
Two questions. When using dynamic rendering, the responses to Googlebot take five to ten seconds. Is that OK, or will it be penalized as a slow page? The responses to humans are fast, because the pages are mostly rendered on the client side.

Yes and no. I think there are two aspects there. On the one hand, if it takes ten seconds for you to render a page to show it to Googlebot, my feeling is that for users, who probably don't have access to your large server and infrastructure, it'll take even longer to render the page. So if it's ten seconds on your side, when you're processing it directly on the server, maybe it takes 15 or 20 seconds for a user, and that's kind of slow. So that's still worth keeping in mind. On the other hand, if it takes five to ten seconds for individual URLs to be rendered on your side, then what will generally happen on our side is that we will see this as your server taking five to ten seconds to serve HTML files to us, which to us means that we probably shouldn't be crawling that server so much, because we don't want to overload it. So instead of maybe 50,000 pages a day that we would crawl from your website, maybe we would just crawl 5,000 pages a day, because we don't want to overload your server. So my recommendation would be to find ways to generally decrease this time, so that instead of five to ten seconds to render all HTML pages, we can, on average, get things a lot faster, similar to how we would be able to fetch normal HTML pages directly. I assume it'll be really hard to get down to those hundreds of milliseconds for normal HTML content, but there's a lot you can probably do with caching to make sure that we can get responses really quickly for the pages that we crawl the most, so that we don't end up getting stuck in this trap of, well, we can't really crawl that much of your website because we're afraid of causing problems on it. On the other hand, if you don't have a lot of pages on your website, then crawling a lot doesn't really matter so much. That might be something where you could say, well, I don't really mind if Googlebot only crawls 5,000 pages a day; I only have 1,000 pages on my whole website, so that's perfectly fine. In a case like that, I wouldn't really worry about it. But if you do have a lot of content on your website and you want Googlebot to crawl all of it, then you really need to make sure that those pre-rendered responses are as fast as possible.

What's the best way to let Google know that a page has been updated? Say a page from a week ago suddenly has a time-critical update, and we need to let Google know about it. This has to be automated and scalable to many pages a day.

OK, so that last part changes some things. One thing that you could do, if this were a one-off thing, is to use the URL Inspection tool and submit the page to Google from there. But that takes manual work; it's something you'd need to do on a per-URL basis. So for individual pages, that might make sense. For everything else, you'd probably want to do it in a more automated way. And the right approach there is generally to use a sitemap file. In a sitemap file, you can specify the last modification date. And with that, you can also ping the sitemap file to us and tell us that something in the sitemap file has just changed. We'll usually go off and fetch that sitemap file right away, we'll see the pages with a new last modification date, and then we can go off and crawl those pages as quickly as possible.
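As a rough sketch of what that looks like, assuming a hypothetical URL, the sitemap entry for the updated page carries the new date in its lastmod element:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/time-critical-page</loc>
    <!-- Updated whenever the page content meaningfully changes -->
    <lastmod>2019-05-17</lastmod>
  </url>
</urlset>
```

The ping itself is just an HTTP GET request, along the lines of https://www.google.com/ping?sitemap=https://example.com/sitemap.xml, which can be triggered automatically on every content update.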
So that's usually the approach I would take there. The important part is that we can trust your last modification dates, so that it's not the case that every time we open the sitemap file, your whole website has magically changed in the last five seconds. But rather, we see that those five pages have just changed, so we can go off and crawl those five pages.

I could use some help improving rankings for a directory website. If you aren't a well-known directory, like Yelp or Google, with users submitting lots of information on their own, it can be difficult to compete for local-intent searches.

I can imagine that that's difficult. I suspect, in general, these types of sites, where you're collecting public information and reprocessing it on your website, are probably tricky to work with, and I can imagine it's something that will get harder over time. Because thinking of it as a user: if you're looking for, for example, the opening hours of a local business, then you want to have that information as quickly as possible, or maybe you want to go directly to that local business's website. You wouldn't go to this kind of intermediary in between. However, if you do have a lot of unique content and unique value that's not available anywhere else, then that changes things a little bit, in that suddenly the content that you have on your website becomes valuable, and people search explicitly for that content rather than for the kind of generic content that you also have. So my recommendation would be to find ways to use or get help from users, or ways to pick up other things, to really make sure that you have something unique and compelling on your website that people are explicitly looking for. That could be additional information. That could be reviews, like some of these other websites have. That could be compilations of local businesses in ways that are not available anywhere else, in ways that people are explicitly looking for. All of these are approaches that you could take here. I think the important part is really that you don't just go off and say, well, I have a script that can pick up all of these different combinations across the web from existing public sources, therefore I will just combine it all, and that will be my unique value. If you're just combining things that are already out there, then it's hard for us to say, well, this is an important page that we should show on its own, rather than sending people directly to the source of that information.

Not really sure what's happening with people popping in and out. Something weird is happening. Exciting times. Let's see.

Is it true that you completely ignore text which is copied from somewhere else, like a quote, when understanding the context or the content of a certain page?

I don't think that would be true. So there are multiple levels of understanding duplicate content. On the one hand, if something is exactly the same as another page, then we would see that fairly quickly and say, well, this is exactly the same page; we should pick one of these pages as a canonical, and then we'll ignore the other one. On the other hand, if there are certain blocks of text which are copied across multiple pages, then, first of all, we would index all of these different variations.
And then when it comes to showing the search results, we would try to figure out which of these pages is the most relevant for users, and try to fold away the other pages into that one page. So for example, if you have a website that is selling shoes, and you have all of the different color shoes on different URLs, but essentially the description of those shoes is exactly the same across all of those variations, then we might go off and index all of those different pages. But if someone is searching for something from the description, then we would just pick one of those pages and show that in the search results, which kind of makes sense. It's not that we would ignore the text on the other pages. It's just that showing that exact piece of text multiple times is not really going to provide value to the user. On the other hand, if someone is searching for some combination, where they say, well, this type of shoe in blue, and we have a page that says the shoe is in blue, then we would pick that combination. So essentially, what we're trying to do there is reduce the amount of duplication that we show in the search results, and instead bring the relevant results to those users. That's what is happening there. It's not that we completely ignore that text across all of those other variations of pages. It's more that we try to just show it once. Why show the same piece of text multiple times to users? If they've seen it once, they've seen it once. So we pick one of those to show.

The "linked from" URL list was a good feature in the old Search Console, and in the new one, that's not available. How can we fix external links that are going to our broken pages? I don't think any external tool is as effective as Google Search Console.

So, I don't know about third-party tools. I've seen some really cool third-party tools, so I wouldn't discount them completely. But that is useful feedback. What I would recommend doing is making sure that you submit that feedback directly in the new Search Console, so that the team knows about it. We do pick things up from the forums, from Twitter, from these Hangouts, from events that we go to, and we bring that back to the team. But feedback directly in Search Console is really, really valuable for us. So if there's something like this where you're saying, well, this throws me off completely, I can't fix my website in a reasonable way without having this information, then let us know about that so that we can improve the new Search Console. I don't think we would go back and just re-enable the feature in the old Search Console, but what we could do instead is find ways to provide that information in the new Search Console. So go off and give us feedback, so that we know what you really, really want.

The unavailable_after tag, is that still in use? Or is that also deprecated?

Last I checked, that was still in use. I think that's also fine. It's sometimes tricky to use properly, but it is something that we still use. I have seen it be processed as well, so it's not gone.

You said Google's algorithm doesn't automatically favor the home page ranking above other pages. What should we do to let Google know that a blog post, for example, should be ranking for a certain search term rather than the home page?
If we have a small website, how do we present clear signals to show Google that this blog post is the better page for certain search terms, even though the home page probably has the most internal links pointing to it?

So the best thing that you can do in a case like this is to make sure that you really have that content covered well on those blog posts, and maybe make it a little bit clearer on the home page that that page is not about this content. You mentioned internal linking. That's really important, and the context we pick up from internal linking is really important to us as well. So with that, the anchor text, the text around the links that you're giving to those blog posts within your content, is really important for us. Additionally, of course, the content, like I mentioned, is really important. So make sure you have clear titles on those pages, that you use clear headings, and that you structure content in a way that's easily readable and that makes it really clear that this is about this topic, without resorting to keyword stuffing.

Keyword stuffing is that thing where sometimes I see people saying, well, if mentioning my keyword twice on this page is good, then surely mentioning it 500 times will be even better. And in practice, what happens with our algorithms is, when we look at that page and we see this keyword is mentioned 500 times, we think, probably this page is not really so relevant for that keyword, because they're trying so hard to artificially promote it with those keywords that maybe we should just ignore it completely for those keywords. So with that, my recommendation would be to be reasonable about putting keywords on your pages. Write your pages in a way that works well for users, rather than in a way that you think search engines might pick up. The old-school keyword stuffing, where you just include those keywords as many times as possible in all variations, is something that search engines have seen a lot of over the last, I don't know, 20 years or so that they've been around, and they're generally pretty good at ignoring all of that in the meantime. So instead of just stuffing those keywords in there, make sure that you're writing something reasonable, in a way that includes what you want this page to rank for, but that doesn't go completely overboard. So again, back to your problem: making sure the home page is good is definitely a good thing. Making sure the home page is clearly not primarily about those keywords is important. Making sure you have clear internal linking, with clear anchor text, to the pages that you think are important helps. And on the pages that you think are important, make sure that you at least bring those keywords in, but at the same time, don't go overboard with regards to keyword stuffing. And of course, keyword stuffing doesn't just apply to blog posts. It applies to any kind of content that you might have on a website.

Let's see. Is it normal for Google to extensively crawl JavaScript when the site is server-side rendered and the whole content is already in the HTML? What's the best solution to help Googlebot understand that all the content is already in the HTML, other than blocking JavaScript crawling in robots.txt, which is very aggressive?

So I wouldn't block JavaScript in robots.txt, because being able to crawl it does help us to render pages properly.
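Just to make that concrete, the kind of blanket robots.txt rule being discouraged here would look something like this (the paths are hypothetical):

```
# robots.txt - the aggressive approach being advised against here:
# blocking script files can prevent Googlebot from rendering pages properly
User-agent: Googlebot
Disallow: /assets/js/
Disallow: /*.js$
```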
My feeling here is that, if you're using server-side rendering and we're still processing the JavaScript on your pages, then that server-side rendering is not being done in an optimal way, in the sense that we still find the JavaScript and we still execute it on the page. Usually, with the kind of dynamic rendering that I've seen, what happens is that the server takes the content with the JavaScript, processes all of the JavaScript, and in the end the page doesn't really have any JavaScript left on it, because it has already all been processed. Obviously, there will be some amount of JavaScript still left, maybe JSON-LD structured data markup on the page. But for the most part, the pre-rendered page would be such that it can be viewed in a browser, or by Googlebot, for example, without having to process the JavaScript. So if we're still going off and processing all of the JavaScript on your pages, despite you using server-side rendering, then to me that feels like something with the server-side rendering isn't set up optimally. I definitely wouldn't go off and block the JavaScript from being crawled, because sometimes it is useful for us to be able to process the JavaScript, especially if it's still on a page.

The other thing that I would look into here is ways that you can use caching a little bit more. This is independent of server-side rendering, but especially if you have JavaScript files, CSS, or any other files that you're embedding in these pages, try to think about ways that these resources can be cached properly. In particular, try to avoid putting session IDs or similar into the URLs, and try to avoid the situation where the URLs change on a daily basis without the content changing, so that we can actually cache that JavaScript file for maybe a couple of days, maybe even for weeks, and use the cached version when we render any page. So that's something I would look into, to see what you could do to reduce the number of URLs that are generated on the website. It shouldn't be the case that we have to refresh the same JavaScript file over and over again. You mentioned 60% of your total crawl budget going to refreshing the same JavaScript files; that sounds more like we're finding a lot of different JavaScript files, and we're trying to pick those up and use them.

Sitelinks for authoritative sites are creating significant data flow in Search Console. I mean, impressions and clicks are counted the same way for both sitelinks and normal URLs. So if you want to see click-through rate and position by query, sitelinks are changing those numbers on a large scale. Do you have any fixes planned?

I don't think we would have any fixes planned there, because, for the most part, we see sitelinks as normal search results. They're shown in a slightly compacted way, but essentially they're normal search results. And it's normal, sometimes, to have multiple URLs from the same site shown on the same search results page. This is why, when you look at it on a per-site level or on a per-query level, we show you the average top position in Search Console, which means that if there are three or four or however many URLs shown from your website for a specific query, we count the top position for that. And for clicks and impressions, we also focus on a per-site level there.
So that's something where, if multiple URLs from the website are shown for the same query, we would count that as one impression, because the site was shown in the search results there, even if there are multiple results shown for that query. So from our point of view, we wouldn't necessarily see that as a flaw. Obviously, if you look at it on a per-URL basis, it does get more confusing, because then you would, perhaps, have multiple URLs from your website each get one impression, and if you add up those five or so URLs, you have five impressions, whereas if you look at it on a query or site level, you see one impression. That kind of mismatch is, from our point of view, on purpose, so that you can look into individual URLs if you want, but still have a clear overview of how often your site was shown in search, by default, for queries or on the site level.

Woo-hoo, we have one more visitor. Something weird has been happening where people kind of get thrown out or something. I need to figure out what's been up there. I don't know, is there anything from your side that you'd like to ask?

Morning. I wanted to ask about the new fetch and render tool in Search Console. I've seen there's already been some discussion about it. But we've got a lot of websites that just aren't rendering properly when it delivers the mobile image of the site, if that makes sense. I know there have been comments about it, people saying that it's nothing to worry about, but I just wanted your take on it, really.

So how do you mean, like images in the mobile version?

Images, JavaScript, CSS, it varies between sites. But it comes up saying, basically, that Google's not been able to fetch the assets, but they're not blocked in robots.txt or anything like that. It's just saying that it's not been able to get them.

OK. So what might be happening there is that we run into kind of a timeout situation. That's something I've sometimes seen, especially on more complex pages where you have a lot of embedded resources. What happens in the URL Inspection tool is that we have a fairly tight timeout there, so that we can show you a result in a reasonable time and you can see, does it generally work? For web search, in general, we have a much longer timeout, and we cache the individual resources. So instead of needing to crawl 100 image files and 10 CSS and JavaScript files to render one page, we can do all of that in the background, ahead of time, and then when we render the page, we have all of that from the cache, so it works out. But for the URL Inspection tool, we try to get everything as fresh as possible, so none of that caching takes place there, just to make sure that we show you the current view. And sometimes what happens then is that we run into these hundreds of images and CSS and JavaScript files, and we just can't get them in time to show you something. So that's probably what you're seeing there. I do wonder now, though, because some people have also mentioned this to us, if we should maybe have an option there to say, OK, I don't care if it takes five minutes, just try to get everything.

Yeah, maybe. I was just concerned whether it was actually how Google was seeing the page, but I think you've answered that thoroughly. Thank you.

Cool. Now, it does seem like something we need to make a little bit clearer, at least in the user interface, though.
I think there is a 20-second timeout; we've spent ages looking at it, trying to work out why it's not working, and it does look like anything after 20 seconds just doesn't get fetched.

OK, I think that's something we can work on, to at least figure out ways to make it clear in the results that this is because of this specific thing, and that you don't need to panic if we can't pick up all of your CSS or JavaScript files. Maybe try it again, or double-check the indexed version instead of just checking the live version.

That sounds good. Cool, thank you.

Cool. All right, since you're the only one here at the moment, if you have anything else, let me know.

I'll ask the office if they've got any questions. I'm usually at home, but I happen to be in the office today, so I'll ask around. Quite a good opportunity.

Cool. All right, let me double-check to see what else has been happening. OK, so looking at the comments, it seems like people are getting a message that they're not allowed to join this call anymore. OK, well, I guess you have to be Anthony to get in. Oh, well, OK, cool.

All right, so let's take a break here. Thank you all for joining, and thank you all for trying to get in in between. I hope this was useful. I'll try to figure out what's happening with people getting removed from the call, and see what we can do to prevent that next time. All right, have a great weekend. See you next time. Bye.