All right, welcome, everyone, to today's Webmaster Central office hours hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what we do are these office hours hangouts, where webmasters and publishers, anyone who cares about web search, can join in, ask questions, and comment on how things are going. Looks like we have a bunch of people here already, which is awesome. Do any of you want to get started with maybe your first question?

Hi, John. So I have a question. If I decide to block all the IPs that use proxies, is that something that Google can see as a negative signal?

Probably not. As long as Googlebot can still crawl your pages, we don't really care directly what you do with the other IP addresses. Obviously, if you're blocking legitimate users, they can't access your content, they can't buy anything, and they'd be less likely to recommend it to other people. But that's ultimately up to you. Some sites block users in certain countries. Some sites are very strict with regard to abuse that happens from IP addresses and IP blocks. That's ultimately up to you.

Some CDNs offer to hide your content based on the country of the visitor. So, for example, if somebody is coming from Korea, you show a captcha, and they need to solve the captcha in order to see the content. And there's another case, I believe Cloudflare does this, where they say, "We are checking your browser." So in case our website has security issues and we believe somebody is using some kind of automated tools to do something with our content, will using CDNs that do that hurt our search performance in some way? Because you're basically hurting the experience for people by slowing down the initial load of the website, and also hiding the content from the user, so it's not visible at the moment of the load.

That's ultimately between you and your users. If you think your website is fast and accessible for your legitimate users, the ones you care about, then that's fine. If you think that maybe you're accidentally blocking legitimate users with these systems, then that's probably something you want to fix. The important part from Google's point of view, for SEO, is just that we're able to access the content with the normal Googlebot IP addresses. And I believe a lot of these systems also have a provision for automatically allowing search engine bots, so that will probably be OK.

It won't be seen as cloaking, because you're basically serving different content on the initial load to the user and to the bot, right?

That's ultimately up to you.

Thanks.

All right. Any more questions before we get started?

Hi, John. So my question is somehow related to that first question. We've implemented client-side redirection for the URLs on most of our website. When users try to load the pages, they're fetched from nginx, and some parameters get appended, so the content is the same, but it's shown through different parameters along with our main URL. Is there some kind of issue with that? I mean, Google sees different sorts of URLs, but the content is the same. How should we handle that?
I think you'd want to be careful in a case like that, because it kind of depends on how Google is able to crawl through these different URLs and what happens with them. For example, if you redirect to a URL with a session ID, let's say in a worst case, then that means we'll find the session-ID URL, we'll try to crawl and index it, and then in a second step we'll see, oh, it's the same content as we've seen before with a different session ID. So it's a lot of extra work for us to crawl and index all of this content, and that's something I'd generally try to avoid. Just serving different content, maybe like personalization, is less of an issue. But if you're changing the URLs on the fly, that can make things a little bit more complicated with canonicalization, with us knowing which URLs belong together, all of that.

Yeah, so the thing is, we already have the canonical on all those URLs. They're all pointing to the main pages. The only thing is, maybe because of that redirection, there's a chance Google finds those URLs through other websites when they link to our website, and some of the pages with that particular URL pattern are also getting indexed. So I'm just wondering, if Google sees those pages, will it assume other pages also exist for these specific URLs? Will it wonder why this website does it this way, having two pages with that additional redirection before allowing the user to see the content?

Yeah, I could imagine that gets a little bit tricky, because we'll try to crawl those different pages, which means we first have to see the rel canonical on those pages. So we have to crawl all of those pages, which means we potentially have to crawl a lot more URLs than is actually necessary for your site. If you have content that changes quickly on your website, that means we have to crawl all of these duplicates first, and only then do we see the changed content. So anything you can do to simplify that definitely makes sense. Also, if users see those different URLs, they might link to them. They might say, oh, this is a fantastic website, I'll link to that URL. And if they're linking to one of these URLs with weird parameters, it takes a lot longer for us to process all of that, find the rel canonical, and follow it back to the preferred page. So the easier you make it for us to find the right URLs, the more likely we'll be able to pick them up quickly.

Sure. Thank you, John.

Sure. OK, so let's take a look at some of the questions that were submitted. It looks like a bunch of these questions are fairly long. In general, the longer questions are a little bit harder for me to answer live, because it's really hard to get into the details there. Those are also the ones where I usually recommend just starting a forum thread and getting input from the webmaster community. There are a bunch of really smart and friendly people there who can help you figure out the details.
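For reference, the rel canonical setup mentioned above is a single link element in the head of each parameter variant, pointing back to the preferred URL. A minimal sketch, with made-up URLs:

    <!-- On https://www.example.com/page?sessionid=abc123 and any other
         parameter variant of the page, point back to the preferred URL: -->
    <link rel="canonical" href="https://www.example.com/page">

As John notes, Google still has to crawl each variant before it can see this hint, so reducing the number of variant URLs in the first place helps more than the annotation alone.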
So the first one goes something like this: I'm writing on behalf of one of the largest online retailers of prescription eyewear in the US, and we're recently seeing some changes in rankings with regard to our website.

And that's something where, from my point of view, I don't really have much insight into a generic situation where you're seeing a change in rankings, because changes in rankings can always happen. Even if a website has been around for a long time, that doesn't mean its ranking will always stay the same as it was before. If you start a new website, you also want to have a chance to make it to the top of the search results. And similarly, if you're just coasting, in an extreme case, with a website that was really good a couple of years ago, then maybe that's not actually what users still expect to see in the search results. So these are all situations where changes in ranking are essentially normal, and changes in the way we put together the search results happen all of the time. We try to work on improving the relevance of our search results, matching the changes in user expectations over time.

So for something like this, I would go to the Webmaster Help Forum, mention your URL, and mention some of the specific queries where you're seeing issues. The top contributors there, the product experts, will generally be able to point you at some things you could be looking at, maybe give some tips. Or, in a worse case, they might be able to flag it and say, hey, someone from Google needs to take a look at this exact example, these specific queries where we're showing results that are really bad, that could be a lot better or used to be a lot better. So that's the direction I would recommend going there.

John? Yeah. Hey. So obviously, you guys have said you would confirm updates related to, I don't know, page speed and things like that, which you pre-announce so that webmasters can get prepared. There are also cases where you'll confirm algorithm updates that have happened that were not pre-announced, that are related to just core search rankings. And obviously, there have been a lot of rumors spread by some people around algorithm updates in the past couple of months or so, a lot of them, that have not necessarily been confirmed. I guess you know behind the scenes what goes through the decision process with the search team, whoever's confirming these updates, in terms of, oh, we should confirm this core algorithm update and not confirm that one. Is it a certain percentage of change that happens in the search results? Does it depend on who you ask, or what kind of mood they're in? What's the answer?

I don't think, in most cases, there's an explicit guideline where we say that if it crosses this specific metric, then we'd announce it explicitly. One aspect that, from my side, plays a strong role here is whether or not there's something really actionable from the site owner's point of view with regard to the change. So things like page speed, or the mobile-friendliness update, or the mobile indexing changes, those are all things where we do make changes in the ranking, and it's something that a webmaster can explicitly influence. So if we tell them about it, like, hey, we're going to do this because we think it's the right thing, then they can take that and say, OK, I can work on this, and I can make changes to my website to make sure it matches these expectations. That's something that, from our point of view, definitely makes sense to announce.
The general relevance updates that we have in search, the core ranking updates, those happen all of the time, on a daily basis more or less. And for a large part, these are just small shifts that happen in search. It's not something where we'd be able to tell people, hey, we think your site is less relevant for this query, therefore you should make it more relevant, because that's not really useful feedback. So we tend not to announce those too broadly. And sometimes, when a lot of people start talking about these kinds of updates and we see that people are really confused, then we'll say, OK, fine, we said we wouldn't talk about this, but maybe we just want to confirm that we actually did make some changes here. It's not that you're seeing ghosts. But there's nothing explicit that you can work on directly with regard to these changes. So that sometimes comes into play there.

Generally, the more noise the community is making, the more likely you'll go ahead and confirm that, yes, you may have done an update that day.

I wouldn't frame it as the more noise. I mean, obviously, the community can be really loud at times. But when we see that people are genuinely confused, and when we see that the people who are confused aren't the SEO bunch who understand that changes happen all the time, that Google made this tweak here and changed that there, then I think it makes sense to talk to those people and let them know about it. For the most part, the normal ranking changes that happen all the time, where people talk about them in forums and some sites go up and some go down, that's not really something where we have anything useful to say. Even confirming it would essentially be, yes, we made changes again. We make changes every day, because we're not playing pool all day. We're actually working.

So you guys literally release code changes to the core algorithm every single day, or almost every day? Not changes meaning UI experiments and stuff. Forget experiments — core releases, every single day, to the algorithm?

I don't know what the last numbers were that we published. I thought it was something like 2,000 changes a year, and several thousand experiments.

Yeah, but a lot of those changes could have been UI changes or treatment changes and such, not just core algorithm changes.

Well, all kinds of changes have an effect on the search results, right? Otherwise, we wouldn't be making them. And sometimes small UI changes throw people off as well. So I think it makes sense to put all of those together.

OK, well, if you ever want to tell us what's core algorithm changes per year versus everything else, that'd be great.

I don't know where you would draw the line with regard to core algorithm updates.

Things that specifically impact the 10 blue links — not featured snippets, not local packs — just 10-blue-links ranking changes.

So if we change the algorithm for capitalization in the search results...

Just ranking. Pure core ranking.

It's like you're making up all these kinds of subtle distinctions now.

No, I'm not. The core ranking — I don't care how it looks, just the 10 blue links in terms of how they're ranked. That's what I would love to know. Just a request. Maybe as a Christmas present. It's coming up next week.

Oh my gosh. No pressure.
I don't have anything there. I don't expect us to pull out those kinds of numbers. And I think it would be really hard to evaluate what those numbers would mean. Like if we were to say, out of 2,000 changes, 200 were core ranking changes — what does that really mean?

It just gives me more of a frame of reference. People always say, oh, you wrote a blog post about a core algorithm update, and, oh, they do 2,000 updates a year, or 300 updates a year. But a lot of those updates are not specific to ranking. So it just gives you more context.

I don't know. We'll see. We'll see.

All right. So now we have a question that says it's very easy. I just started a website and indexed the homepage 30 days ago. Now I want to change the whole title of my homepage. Does changing the whole title affect my site's ranking?

Yes, probably. I believe we do take the title into account when it comes to ranking a page. It's part of the content on the page. It's also something that, in many cases, is visible in the search results. So if you change things on your pages, that can affect how those pages are shown in search.

How long will it take for my homepage and other pages to reach page one in the search results with proper SEO and good content? And in parentheses: I'm better than my competition.

So I think all of us would like to know the answer to this question. And I don't know if we have to do a core algorithm update to shuffle things around so that your site gets on top. But there is, unfortunately, no fixed timeline for things to reach the top results in search. Depending on what happens, it could be that your page, from the moment we discover and crawl it, is number one for some types of queries, where maybe you're just writing about a topic that people have suddenly gained a lot of interest in, and there's no other relevant content out there. Then that might result in your page being on page one. But in general, this depends on so many different factors: your page, how it interacts with the rest of the internet, how things are seen for those specific queries. Sometimes the factors we use for individual queries vary over time as well. For example, if you take any recent event and search for it, you'll probably see more recent results, whereas if you search for it a couple of weeks later, you'll see more traditional results. A nice example here: if you search for "campfire", you might see something about a camping site where you have a small campfire and you're cooking something, or you might see something about the big wildfires they had in California recently. So that's something that will definitely change over time. And depending on the type of content you have, maybe you'll be number one, or maybe you'll never be number one, because there's so much other good content out there. It really depends a lot.

Can you talk more about authorship and its importance in determining the expertise of an article? Is it important to have author bios for each writer on the website? Does it matter how many articles a person has written?

So this is something that comes up again and again. We used to have the authorship markup on pages. We don't have that anymore, or at least we don't use it anymore. But otherwise, it's also something that I think users care about quite a bit.
So instead of thinking about this as an SEO problem, I would think about it more as a user problem: how do you gain a user's trust? How do you make it clear to them that the content you're providing is actually relevant, that it's actually something they can trust? I would see this more as a user-experience issue rather than a pure SEO issue. Obviously, search algorithms try to understand what users are thinking and to rank results in a way that users would find relevant, so some of that could play into it as well, more indirectly, probably. But primarily, you can work on this at a user level, do user studies, and see how users react to it, or whether they care about it at all. Maybe for your specific site, it doesn't really matter.

I have a question related to that. Can I ask it?

All right, let's go for it.

Recently — I mean, not just recently — there has been a lot of debate and discussion about E-A-T and what's in the quality raters' guidelines and so forth. And most people who think about these things are thinking of questions like the one you just answered: do I have to have bylines for the authors on my website? And I think folks sometimes forget that it's more about overall product quality. I don't know if you would agree with that. Basically, referencing what Amit Singhal wrote in the earlier posts about Panda and so on would be something much more useful for people to look at than trying to guess whether it's a byline they need on their website, or three introductory lines before their main content, or a certain number of images in their posts. Do you want to elaborate a bit more on that? Because I've been saying that it's mostly about product quality, but people tend to reduce quality to these details. And quality is different for everyone who assesses it.

Yeah, I think that totally makes sense. It's also worth looking at what search engines are trying to do in the long run, which is to figure out which pages are good, which pages are relevant for individual queries. What they might be doing in the short term, like looking at individual meta tags or at this specific technical detail, is essentially just done in service of that long-term goal. So focusing on these short-term tweaks, like how many links you should have on a page, or how tall your author bio photo should be in pixels, those kinds of things are very short-sighted. It makes more sense to really focus on: what can I do to provide a really high-quality product, in a way that users look at it and say, well, this is really relevant, this is fantastic content, and if Google doesn't show it in the search results for these queries, then Google has a bug. If that's how users think about your website, then I think you're on the right track.

John, by the way, regarding this current discussion, I know I sent you some information about that August 1 update for some Romanian queries that I managed to collect and send to the team.

Yeah, I forwarded those on. I think there's another question from some Brazilian sites as well. Non-English content is always a bit trickier, so that's always interesting feedback to forward on to the folks.

OK, cool.

And let's see, there's another question here that goes into something similar.
Is it sufficient to display the date the article was last updated, or is it also required that we show the first published date?

That's something you can do either way. Personally, I like having both of these dates there. There's also structured data you can use for both of these dates. So if you provide them on your articles, I would also look into the structured data and try to use that as well. I think both of these dates can be useful for algorithms.

John, with the structured data, would Google prioritize that over whatever dates appear in the text content? Would Googlebot prefer the structured data form?

We prefer when things are consistent. Let's put it that way, because that's something where we have run into issues. Specifically for news sites, where we see one date on the page and a different date in the structured data, or with breaking-news-type things, if it has one time zone in the structured data and a different time zone elsewhere on the page, that can be really confusing for us. And then you'll see weird things, like a breaking event that just happened 10 minutes ago, and in the search results it says it happened five hours ago, just because of a time zone issue. Similarly, if you don't have a time zone in the structured data, then we kind of have to guess what your local time might be and how that maps. For most things, in the long term it doesn't matter. But in the short term, especially for breaking news, it can play a role.

But you mentioned you'd personally prefer both the published date and the updated date for a given article or content piece. Would Google have any preference in the structured data? Maybe it doesn't differentiate which is the updated date and which is the published date. What would you recommend in that case?

You can mark them both up, so they're both in the structured data.
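As an illustration of what that markup can look like, here is a minimal sketch of an Article marked up with both dates in ISO 8601 format, including the explicit time zone offsets John mentions (the headline and values are made up):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Article",
      "headline": "Example headline",
      "datePublished": "2018-12-10T08:00:00+01:00",
      "dateModified": "2018-12-14T16:30:00+01:00"
    }
    </script>

The key point from the discussion above is consistency: the dates and time zones shown here should match whatever dates are visible in the page's text.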
All right, Julie, I think you had a question.

Yes, I did. Do you want me to explain it? OK, so what I wanted to understand is Google rankings and blogging. There's a particular realty website where, if you blog even very modestly, you get top Google rankings. And I think it's because they're using blogging for all of their agents, or doing some other kinds of things to rank their website. What I'm wondering is, is it simply that nobody else is blogging about Maui realty? Or is it that, even though each individual site is a sub-site of the main site, any blogging gets you ranked because of the primary domain it's on?

I would assume that the two aspects are not directly related, that it's more a matter of you're putting out new content, content that's relevant for people who are searching at the moment, and we're ranking it based more on that. That's something we often hear from various sites: should I be blogging? Should I write 10 articles a day, or five articles a day? How much is too much? How much is not enough? From our point of view, it's not a matter of going out and blogging and creating a specific number of articles a day. It's more a matter of: well, you have some really fresh content here, and some of this content is really relevant for some searchers at the moment, so we'll show it in search. It's not that blogging itself makes the site or the content rank higher. It's just, well, you happen to have some new content here that happens to be relevant, so we'll show that. And that can be done in different ways. Maybe you just put it on your website in general. Maybe you have a blog where you put this kind of content out in a more structured format on a regular basis. How you handle the mechanics of putting that content online is essentially up to you. And for some kinds of content, I think it definitely makes sense, because you cover a broader range of content, a broader range of potential queries that could be coming to your website.

So it's completely independent of the domain I'm blogging on? Everything else going on on that website other than my part has no effect? If I started my own .com and blogged there, I would have the same effect?

Pretty much. I mean, there are always some supplemental effects with regard to us being able to find the content quickly, us being able to understand that this website is generally creating high-quality content. So there is some amount of additional information that we collect for the website as a whole. It's not that you could just create random URLs on the web, put your blog posts up there, and we would find them magically and rank them number one. It does require some amount of structure, so that we can understand that, over time, this is actually pretty good content, and we should check it regularly to make sure we don't miss any updates.

OK, so there is something to the domain itself that has got your attention.

I don't know about this specific case, but that's something you can also build up on your own. If you're saying, well, I can either blog on my own site or on this other site, then you can make a strategic decision and say, well, I'll build up my own site over time. That's my own property; I can keep it forever. You might take into account that, in the beginning, it'll be a little bit harder for us to find this content and for it to be visible in search. But in the long run, you're building up your own reputation rather than someone else's.

Right. OK, so it's not that hard to achieve. It just may be a slower start.

I don't know about not that hard. It really depends on the competition there. But it's definitely something you can take on, and something where you can make a strategic decision: either I invest in this for myself for the long run, or I take the easier route, accepting that I'll have to keep working on it all the time, and maybe use a different platform. That's ultimately up to you, and there are different pros and cons to each approach.

Thank you. Thank you for having this forum and for taking my question. And can I ask you for a reference on another issue?

Sure.

A friend of mine had a child who died recently, and they're trying to access his cell phone. They want to at least get his contacts and pictures and things like that. But it's Google-locked. It's an Android, a Samsung. We tried to reset the password through Samsung, and Find My Phone works, but we can't seem to get in. It's like it locked out biometrics and everything else, I guess in some kind of defensive mode from us trying to get into it. We're just trying to help the parents out, and I don't know where to turn at this point.
There's a help center article on the Gmail side, I believe, specifically about accounts you're trying to access for people who are deceased. So that's the approach I would take there. And if you can't find it, then I would post in the Gmail help forum. You don't have to mention the account or any of the personal details, but the folks there should be able to guide you through it. I assume it's not a very easy process, because we also want to make sure that people can't just randomly take a phone and say, oh, this is this person's phone, and I need to access the information on it. But there is a process for that.

OK, great. Thanks.

All right, let's grab some more questions that were submitted. Let's see. There's a specific website here where the traffic went down from 150,000 to 10,000. I think this is probably also one of those cases where posting in the Webmaster Help Forum would make a little bit more sense, because folks there can take a look at the specifics and maybe give you some individual information on what you could be focusing on or looking at. Reviewing these live is always a bit tricky, so I'd recommend posting in the forum for that.

Is there any problem with Google Search Console's Fetch as Google? Everything is loading fine except the images. No errors about this; everything is loaded in the source code.

So I think there are two things you could be running into there. One is that perhaps these images are blocked in some particular way. That could be through robots.txt, for example, so that they're blocked from loading, or somehow, wherever they're hosted, they're not accessible from Googlebot's IP addresses. That could be one option. Another option is that we're just running out of time when rendering these pages. In particular, for these testing tools, we try to fetch all of the content as fresh as possible so that we can show you the current preview. And by doing that, we sometimes need to request hundreds of different URLs, depending on how the page is structured. That can result in us saying, well, we don't really have time to fetch all of these; we'll tend toward giving the user a faster preview rather than a more complete one. So that could be happening here too: maybe we've seen these images, but we just can't fetch them live quickly enough to show you the preview. For indexing, it's a little bit different, because we have more time. We don't have to give you an answer right away. We can take the HTML page and fetch all of the embedded content over the course of a day, or even longer. And we can cache all of that content for a fairly long time, so that the next time we look at the page, we'll say, oh, that image, I've seen that before, I don't have to fetch it fresh again. So it's probably one of those two options in your specific case. And if everything else is loading fine, then it sounds like things are lined up properly and working as expected. You can perhaps look at the number of requests the page makes using a tool like WebPageTest, or by checking in Chrome's developer tools. If you see that hundreds of different requests are required to load the page, then from a speed point of view, it makes sense to clean that up a little bit, and that should be reflected in Fetch as Google as well. But in general, if things are just timing out with Fetch as Google, that's not something you usually need to worry about with regard to indexing.
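As a quick way to rule out the first possibility John mentions, it's worth checking whether the site's robots.txt disallows the image paths. A rule like the following (the path here is purely hypothetical) would keep crawlers from fetching anything under it, images included, even though the page's HTML loads fine:

    User-agent: *
    Disallow: /assets/images/

If the images live under a disallowed path like this, or on a separate image host with its own restrictive robots.txt, they would be missing from the rendered preview without any error in the page source itself.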
Why does Google index XML sitemap links? When I search for a URL in Google, it shows an XML sitemap link where that URL is listed.

That's kind of normal. It's not really unexpected, at least. An XML file is something we can index, and within the XML file you'll have the different URLs that are mentioned there. If we happen to index your sitemap XML file, then we could be showing it in the search results. In general, for normal queries, those pages wouldn't show up, because there's nothing really relevant or important on them compared to the rest of your website. But if you search explicitly for content within the sitemap file, then maybe it'll show up in search. And that's not really a problem per se, because nobody would explicitly search for that URL and expect to see anything other than, maybe, that the URL is mentioned somewhere on the web. So usually I wouldn't worry about this. If you do want to make sure that your XML sitemap file is not shown in search at all, you can use the X-Robots-Tag HTTP header for XML files, and then we can drop those pages out of our search results, kind of like how you would use a noindex robots meta tag on a normal HTML page.
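For illustration, the server would send that header with its response for the sitemap file, so the response looks roughly like:

    HTTP/1.1 200 OK
    Content-Type: application/xml
    X-Robots-Tag: noindex

A minimal sketch of one way to configure that, assuming an Apache server with mod_headers enabled (the file name is just an example):

    <Files "sitemap.xml">
      Header set X-Robots-Tag "noindex"
    </Files>

The sitemap stays fully fetchable for crawlers; the header only keeps the file itself out of the search results.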
Hey, John, what about an HTML sitemap? When I search for my brand name together with the TLD extension, I get the sitemap URL as a sitelink. Is that normal, or do we need to do something about it?

So an HTML sitemap is essentially just a list of links, like a normal HTML page, and that can show up in the search results as well. That's essentially normal. Most of the time, these HTML sitemap pages are used more for navigation, for helping users find other content on your site. You could think about putting a noindex on these pages if you wanted to, but for the most part, I wouldn't worry about it. People can go to those pages and find the rest of your website from there.

Thank you.

All right. How long does the View All page need to be so that it would make sense as the canonical for pagination pages?

There is no absolute length, requirement, or recommendation with regard to View All pages. You can make them as long as you think is necessary.

We're also considering the option of an HTML sitemap that lists all products together on one page to enhance discoverability.

I guess that's kind of like the earlier question. I think that's an option as well. If you think users would go there and be able to find your content a little bit faster, that's an option. We can also crawl through those pages, so that helps us a little bit as well. On the other hand, if these pages are totally artificial, and you have thousands of product links on them, and they're not useful at all for users, then chances are we're not going to find much value in those pages either. So those are the pros and cons there. I'd try to make these HTML sitemap pages in a way that they actually work well for users, and there are a bunch of really good recommendations out there on how to do that. There are also people who say you shouldn't use HTML sitemap pages at all, because users should be able to navigate your website in a normal way without going to a table of contents, which is also a point to keep in mind.

We use the Google Translate API to translate all of the comments on our website from German into French and Italian. What's the correct way to handle this? Should I use something like a span with a language attribute on it?

So we don't use the language attribute on individual HTML elements; we try to recognize the language of the content directly. The language attribute makes sense for screen readers, which shift to the pronunciation of the other language. So I wouldn't drop it completely, but consider that most search engines probably won't be using it. For auto-translated content, we prefer not to index it. So if you're using an API to translate parts of the content on your website, then ideally you'd do that in a way that Google doesn't actually index that content, because for the most part, this automatically translated content isn't really that high quality, and it can really throw us off. The other option you mentioned, letting the user translate by clicking a button — I tend toward that a little bit more. How bad is it to show multiple languages on the same page? From our point of view, that's an option; you can do that. What sometimes happens is that, in search, we'll show a translate link in the search results if we're not sure how much of the content is actually translated.

The blogging question, I think we talked about a little bit before. Then there's a longer question about Search Console data with regard to the top-1,000-pages report in combination with the country filter. I wasn't able to follow the question that well, so if you can post a link to your forum thread with the details, I'd love to take a look. It's a bit hard to understand exactly what you're seeing and where you're seeing those differences, but I'm happy to take a look at it with the team.

Since the beginning of July, impressions for specific search terms for a product in our portfolio have tripled, with clicks staying roughly constant. At the same time, Google Ads data shows no increase in impressions for the corresponding ad campaigns. What can be happening there?

I think that can happen; that's kind of a normal situation, especially if a page was fluctuating in and out of the search results a little bit, maybe coming from page 2 up to the bottom of page 1. Those are subtle changes that you might not see reflected immediately in clicks or in ranking, but they can have a big impact on the impressions your pages get from search. So if a page from your website moves from page 2 of the search results to the bottom of page 1, that would be reflected in impressions. Impressions are counted when one of your pages is shown in search; it doesn't matter whether that's at the top of the search results page or at the bottom — all of that counts as an impression. Whereas if it's shown on page 2, and few people click through to page 2, then you'll have significantly lower impressions. And that could also be why you're not seeing those changes reflected elsewhere, for example in a running ad campaign, or if you look at the overall impressions in AdWords.
Old Search Console has a message that the crawl errors reports are being replaced with the new Index Coverage report. In the old Search Console, there's a useful report of 404s listed under Crawl Errors; in the new Search Console, that's only shown for all known pages. Is there a way to get all the 404s in the new Search Console?

As far as I know, we don't have a one-to-one mapping of the crawl errors report in the new Search Console. But that's something where we've also seen, in the past, that people focus a little too much on this list of crawl errors and miss out on the issues that actually play a role on their website. So I'm kind of happy that the focus has shifted more toward "these are issues that are actually affecting your content," compared to "these are a bunch of random URLs we found on the web where your server returned a 404, probably correctly." Because all of those random 404s for your website are not something you need to deal with in any particular way. It's completely normal that we see tons of 404s when we try to crawl a website and find random links on the web pointing at pieces of content. That's usually not something you really need to worry about. So I think the focus on issues that are actually affecting your site makes sense and should make things a little bit easier for site owners.

Does web.dev use Lighthouse for its metrics in the same way that PageSpeed Insights does?

I don't know for sure, but I believe web.dev is built on the Lighthouse infrastructure in the back end. We're trying to make these a little more consistent. With web.dev, obviously, there's a little bit of tracking in the background to see whether the advice we gave you is actually useful for your website, so all of that plays in there as well. The idea is really to give you a comprehensive view of the different effects we can pick up on with Lighthouse, which you can also track on your side, where maybe you run Lighthouse on your own and test it like that.

I want to benchmark our web pages and see how we can improve them. Why is the first contentful paint for field data half that of the lab data?

I don't know offhand. I believe there is a PageSpeed Insights forum out there, so I would post about that specifically, maybe with your URL, so that people can take a look at what's happening. I would expect there are some situations where the lab data differs from the field data, just because of the way we calculate things, and because maybe your users don't have the kind of default device we would use for the lab data.

There's another question about PageSpeed Insights with regard to the speed score. Let's see. I suspect you'd be best off posting this in the PageSpeed Insights help forum as well, so that folks there can take a look. With regard to ranking, when it comes to speed, we use a variety of different metrics to figure out how a page is doing, including some lab data and some field data. So it's not explicitly mapped one-to-one to any of these metrics, primarily so that we can update that over time, but also because we think these metrics are useful for improving your website, even though they might not map one-to-one to what we think is important for search. I don't know if that's the right way to frame it, but essentially, the metrics you see in these tools generally don't map one-to-one to the way we rank things in search.
Please tell us whether this hreflang URL is valid; because of the encoding, we receive errors for the hreflang URL from many audit apps. So one URL is in the escaped format, versus a version written in UTF-8.

From our point of view — I didn't double-check the encoding of this particular URL, and it's given just as domain.com, so it's hard to try out directly — in general, the encoded version and the written-out version of a URL are equivalent for us. So if you use one or the other, that's perfectly fine. I'd recommend just picking one and using it consistently within your website, to make it a little easier on your end to track changes and make sure things are consistent. But essentially, on our side, these URLs are equivalent, as long as the encoded URL matches the written-out one.

After inserting JSON-LD markup directly into my WordPress editor, WordPress automatically wraps the JSON script tags in P tags for display on the front end.

That sounds like the WordPress editor is doing something you don't need there, and it probably breaks the JSON-LD markup. I'm pretty sure that wouldn't work if the JSON-LD markup ends up visible on the page, and it would also be pretty confusing for users to see it. I believe there are some WordPress plugins out there that make it a little easier to add this markup in the correct way, so I'd take a look at those.

And the second part: what about injecting it with JavaScript, for example, after loading the page? Would that be recommended or not?

I'd generally try to put the structured data directly on the page if at all possible. It makes it a lot easier for you to double-check that things are working correctly, and it also makes it easier for us to pick up the markup directly. For pretty much most pages, we do render them; it just takes a little bit more time for us to actually render them, and sometimes it's a little hard to debug what exactly ends up being rendered. So if you can make it easier for yourself, and easier for us, by putting the markup on the page, I'd tend toward that.

And then there's a question about Brazilian pages dominating the Portuguese search results for sports nutrition terms. I'll take these examples and bring them back to the team. If you have more examples where you're seeing weird things like this happening recently, I'm happy to pass those on to the team as well. It's always useful to have explicit search queries there.

All right. Wow.

I have a quick question about AMP. Can I please ask it?

Sure.

Awesome. So do we need a canonical tag on AMP pages linking to the regular non-AMP pages now, or is it still OK not to have that?

We do need that. If you have the AMP page connected to the traditional HTML page, we need that canonical link from the AMP page to the HTML page, together with the link rel=amphtml from the HTML page to the AMP page. Without that two-way connection, we would probably treat those pages as just being separate pages on your website.

So on the non-AMP page, you want us to link to the AMP page, and then on the AMP page, link to the non-AMP page. OK, got it. Thank you.

Cool. All right, I have to run a little bit early. But if any of you have one last question, feel free to jump on in.
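As a sketch of that two-way connection, with URLs invented for illustration, the regular page references its AMP version and the AMP page points back:

    <!-- On the regular HTML page: -->
    <link rel="amphtml" href="https://www.example.com/article.amp.html">

    <!-- On the AMP page: -->
    <link rel="canonical" href="https://www.example.com/article.html">

Both links go in the head of the respective page; as John notes, without both halves of the pair, the two URLs would likely be treated as separate pages.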
Don't even get me going on that Google News thing. I know you guys have said you fixed it multiple times, around Google News sites being indexed and ranked. Do you think it's all buttoned up now, or are you not really involved in that at all?

I just see things from the side there, so I have no idea. I'd be surprised if things were still stuck this long. But sometimes something weird happens in the infrastructure, and it takes a while for that to get cleaned out completely.

Got it. Thank you very much.

All right, Mihai is posting all of the documentation links. Fantastic. Cool. Let's see. Wow, that chat has been pretty active. And everyone is hungry. That's good. Cool.

All right, so let's take a break here. Thank you all for coming. It's been great having you, with lots of good questions this time as well. The next one is lined up for Friday, so if any of you still have questions, feel free to pop them in there. Or, as always, feel free to join us in the Webmaster Help forums, where some of the people who are active here are also active, which is awesome. Cool. All right, I wish you all a great day, and see you next time, maybe. Thanks. Bye.

Bye. Bye, John.