OK, welcome, everyone, to today's Webmaster Central office-hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these office-hours Hangouts, where webmasters and publishers can jump in and ask any kind of website or web search related question, ideally something connected to web search. As always, a bunch of things were submitted ahead of time, which is great. But if any of you want to get started with the first question, you're welcome to jump in.

OK, I'll jump in. OK. I found yesterday that some of the links from our website are not included in the top linked internal pages report in Google Search Console. We are sending these links through the sitemap, but I think that Google is not able to find them by crawling our website. Could this be something bad for our website, or not? Because these are generally faceted filters on our website, so maybe it's not so important. But sometimes it could be.

I generally wouldn't worry about that too much if the website itself is still crawlable and indexable. So if you're seeing that your new pages are findable in Search fairly quickly, then I think that's OK. I don't think you need to do anything specific there. If you notice that individual pages don't show up at all in Search, and you think they should be linked within your website, then I would double-check that. On the one hand, you can check manually per page. On the other hand, there are a lot of really neat website crawling tools out there that you can use to crawl your website and double-check whether all of the content is properly linked within the website.

OK. Because for the user, the links are available. But for Google, I think it's more difficult, because the user has to click before they render. And there are a lot of filters, so Google cannot access this content, because Google cannot click on the website. So maybe it would be better to have these links ready for Google, and not just to send them by sitemap.

Yeah. So a sitemap file helps us to find new and updated pages, but ideally, these pages should also be linked within your website. So I wouldn't say you should only use a sitemap file. Ideally, you have the sitemap file to help, but you should primarily use the internal links within the website to allow crawling.

OK. Thank you. Sure.
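To make that concrete, here is a minimal sketch of the difference, with made-up URLs and filter names: a faceted filter that only reacts to JavaScript clicks gives Googlebot nothing to follow, while a plain link does.

    <!-- Crawlable: Googlebot can follow the href without clicking anything -->
    <a href="/shoes?color=red">Red shoes</a>

    <!-- Not crawlable by itself: there is no URL for Googlebot to follow -->
    <span onclick="applyFilter('color', 'red')">Red shoes</span>

With plain links like the first one in place, the sitemap then only needs to play its supporting role of flagging new and updated URLs.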
Hi, John. Can I ask a question? Sure. Sure.

Thanks. So I have a question regarding the brand perspective. We have one presence in India, which is very popular, and we are thinking of going into foreign countries, targeting the US first. And we're thinking of changing our brand name there. For instance, in India, we're known as OYO Rooms, but in the US, we might want to target OYO Hotels. And we already have a knowledge panel for OYO Rooms. So will it affect our domain in the US market? People might be looking for us as one thing, and maybe they end up with something else. So what would be the strategy there? How can we resolve that, so that when somebody is looking for OYO Hotels or OYO Rooms, they can be considered as one?

I think that's always a bit tricky, because you're doing kind of a brand name change, which takes a lot of work and takes a lot of time. The approach of having two brands in different locations, I think that's also an idea that you could use. If you use something like hreflang between those different language or country pages, you can also have it for different brands, essentially. So you could be called one company name here in Switzerland, maybe, and a different name in the UK, and with the hreflang, you can still connect them, so that if someone is searching for the Swiss name in the UK, we will show the UK name. So that kind of helps a little bit.

But I think any time you separate things out into multiple brands, which are actually the same thing, it's always a lot harder from the tracking side, and from the search side as well. So I would think carefully about that. Sometimes you have to do what you have to do; sometimes it's not you who makes these decisions, so you just have to deal with it. But it's always a bit tricky if you have the same thing with different brand names associated with it.

Sure. Thanks, John. So on that, I think we decided on having a separate brand entity itself. So is it possible to have a different knowledge panel for that, for a different brand name? And again, when you say hreflang, probably only a few pages are going to have a relationship with my other website, let's say the home page only. I cannot link my hreflang to all of my other internal pages, so probably I have to be selective with the hreflang. It needs to be linked from my home page only, or the common pages across all the countries.

Yeah, yeah. I mean, hreflang is on a per-page basis. You can use it for the pages where you want. I would really recommend using it where you see problems. So that's probably maybe the brand name, especially if it's a different name in different locations, because then people in the wrong location might be searching for the other brand name. So with the hreflang, you can kind of work that out. But that's something that you can do for the home pages, or for the about-us pages, or whatever pages where you notice that people are going to the wrong version.

All right, thanks, John. Sure.
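As a minimal sketch of that per-page hreflang setup, assuming two country sites under different brand names (the domains here are invented placeholders), both pages carry the same reciprocal set of annotations:

    <!-- On the India home page, https://www.oyorooms.example/ -->
    <link rel="alternate" hreflang="en-in" href="https://www.oyorooms.example/" />
    <link rel="alternate" hreflang="en-us" href="https://www.oyohotels.example/" />

    <!-- On the US home page, https://www.oyohotels.example/, the same set,
         so the two pages confirm each other -->
    <link rel="alternate" hreflang="en-in" href="https://www.oyorooms.example/" />
    <link rel="alternate" hreflang="en-us" href="https://www.oyohotels.example/" />

hreflang annotations only take effect when they are reciprocal, which is why the full set is repeated on each page in the group.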
All right, let's jump into some of the questions that were submitted, a whole bunch of stuff.

We followed the guidelines and updated the favicons for Google mobile search on our multi-country website. We can see the new favicons in Google mobile search for most of our domains after submitting the new home page in Search Console. However, a few of them still show the old one, even after waiting more than seven days. Two domains that were showing the updated favicon two days ago have changed back to the old one.

So in general, these kinds of things can sometimes take a little bit of time to settle down. Especially if you're talking about a period of seven days, I would still give it a little bit more time. And I would make sure, when you're doing this across a domain, that you don't just update the home page, but that the other pages associated with the rest of the site are also updated. So that when we crawl and index those pages, we see a clear signal saying, for this domain, this favicon is the one that you want. So I would see it less as a technical thing with regards to how you link the icon that you want, and more a matter of making sure that it's consistent across the site. And sometimes it takes a bit of time to settle down properly. I've seen cases where it gets updated within a day or two. I've also seen cases where it takes a little bit longer.
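For reference, a minimal sketch of the favicon declaration being talked about; the point is that the same tag points at the same icon file in the head of every page on the domain, not just the home page (the URL is a placeholder):

    <!-- Included identically on every page of the site -->
    <link rel="icon" href="https://example.com/favicon.ico" />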
An SVG images question: inline SVGs versus image-tag SVGs. As front-end developers, we prefer inline SVGs because they load faster and don't require us to host images. However, the problem is that inline SVGs and their attributes are not indexed by Google and Google Images. Are there any plans for that to change?

I'm not aware of any plans with regards to that changing that we've talked about externally, so I can't really say much there. In general, what I would be careful with, though, is the perception that every image that you have needs to be findable in Google Images. I think it's great to have a lot of images in Google Images, but for a website, it's also worthwhile to think about what you want out of Google Images, and how you see users visually searching for your content. And with that in mind, it's often not the case that every single icon on a home page or on a website needs to be indexed in Google Images, because people are not going to search visually for all of these different icons. So if you're using an SVG as a user interface element, or as a decorative element on a page, then that's probably not something that people will be using explicitly to try to visually find content on your website. I mean, it's theoretically possible that they would do that, but I'd really think about how someone might search visually with Google Images and reach your website, and then think back from there: how can I present myself in the right way, at the right place, at the right time for these users, so that they do come to my content? So that's the angle I would take, rather than purely, I need to get every graphical element into Google Images.
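To make the distinction concrete, a minimal sketch of the two approaches; only the second gives Google Images a separate image URL that can be indexed (the file names are invented):

    <!-- Inline SVG: part of the page's HTML, with no separate image URL -->
    <svg width="24" height="24" viewBox="0 0 24 24" role="img" aria-label="Search">
      <circle cx="11" cy="11" r="7" fill="none" stroke="currentColor" stroke-width="2" />
      <line x1="16" y1="16" x2="21" y2="21" stroke="currentColor" stroke-width="2" />
    </svg>

    <!-- Referenced SVG: a separate file that can appear in Google Images -->
    <img src="/images/product-diagram.svg" alt="Diagram of the product" width="400" height="300" />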
A duplicate content question: why are there so many lyrics websites out there? And it kind of goes into, all of these lyrics are the same, because it's the same songs. Why is this content even indexed?

I don't know why so many lyrics websites exist. I imagine people like searching for lyrics and browsing around, so that's probably why they exist. From a search point of view, what happens with this kind of duplicate content is, we'll recognize that the block of text there is duplicated, but we'll also recognize that the rest of the page around it is not duplicated across all of these different sites. People do really cool stuff with lyrics sometimes, and give a lot more context, rather than purely, this is a song title and here are the lyrics. And because of that, we do try to index all of these pages, and we try to bubble them up in the right place in the search results. So what usually happens is, if you search for something that's within a copied chunk of text across multiple sites, we'll pick one of those sites to show, and we'll essentially filter out most of the other ones. Whereas if we can tell that someone is searching explicitly for something that's unique to that site, so maybe a chunk of text from the lyrics together with, I don't know, additional information that's not directly in the lyrics, or maybe someone even knows your website and is explicitly searching for the lyrics plus your website name, then obviously it makes sense for us to try to pick the most relevant one to show there. So that's something where the text itself is duplicate content, but the rest of the page isn't duplicate content, and therefore it makes sense for us to index that. And when someone is searching generally for that duplicated piece of text, we'll try to pick one of those pages and filter out most of the other ones. But oftentimes, people search explicitly in a way that makes one of these versions the most relevant one.

Can you speak to the necessity of E-A-T and author biography pages linked from an article? Should we have an author's credentials on the article itself, or is linking to the author's bio from their byline good enough? We have an issue where the author bio pages have a meta noindex. Does this stop Googlebot or the quality raters from accessing these pages?

So I think we talked about this question a little bit in the last English Hangout, especially about the quality raters, and how they don't rate your website; they help us to rate our algorithms. So that's not something that you'd need to explicitly worry about. With regards to how you link the author's credentials, that's something where we don't have explicit guidelines from a search point of view, but where you can work on this yourself and think about how users might expect to find this information, and how you can provide it in a way that really highlights the value of your website and of the content that you're providing there.

I have a multilingual website that I'm working on. Each language has around 100,000 pages. There are six different languages, but only three of them are translated. For the others, the non-translated content is in English. OK, and then it goes into, so how do I deal with the non-translated versions? They're looking at two options: first, remove the hreflang for those pages and canonicalize them, or second, fix the hreflang problem and use self-referencing canonical tags on those non-translated versions.

So ultimately, both of those options could work. You could aim to have these pages indexed separately, especially since you mentioned that for the non-translated versions, there's still a difference in the currency that you show on these pages. So these are kind of unique pages, and we might index them individually. On the other hand, if there's not that much value in the uniqueness across these versions, then it might make sense to canonicalize. In general, my recommendation is to tend towards having fewer URLs rather than more URLs. Having fewer URLs makes things easier from a technical point of view, on the one hand, but it also makes it a lot easier for us to concentrate the value of your website, of the content that you have there, in fewer pages. Because if we have to spread that out across these different language versions, which are kind of the same and are kind of competing with each other, then multiple of these different versions are essentially not as strong as they would be if they were combined into one individual page. So that's generally the direction I would tend towards: try to find a way to have fewer pages rather than more. So if you know that you have different country versions where the content is actually the same, maybe there's a way to concentrate those into one version rather than having separate versions.

If you do want to make separate versions, maybe because you have to for policy or legal reasons, that's fine as well. Using the hreflang between those versions is an option. What can sometimes happen, though, is that our systems recognize that the content is essentially the same, and we will pick one of those versions and use it as the canonical internally. We'll still use the hreflang to split out the different URLs, though. So in your case, where you have the general English and the Germany content both in English, we might pick the general English page as the canonical. We'll index that one; we won't index the Germany page that's in English as well. But when a user in Germany searches, we'll know, because of the hreflang between those different URLs, that we can show the Germany URL to users in Germany. So it makes it a lot trickier to track. It makes it a lot trickier to monitor in Search Console, because we fold things together into one canonical version, and then we split them out again when we show them to users. But that's an option as well.

So it's really hard for me to say what you should be doing there. My general approach is, if you're unsure, I would try to just have fewer URLs and work from there. If you really, absolutely need to have separate URLs, then you have to have separate URLs; that's kind of the situation. And in an ideal world, if you do have separate URLs, then I'd recommend trying to get localized content as well. If you're already going down the hard route, you might as well make it easier for yourself in the end.
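A minimal sketch of the second option being discussed, self-referencing canonicals plus hreflang, for a US page and a Germany page that both happen to be in English (the URLs and path scheme are invented):

    <!-- On https://example.com/en-us/product/ (English text, USD prices) -->
    <link rel="canonical" href="https://example.com/en-us/product/" />
    <link rel="alternate" hreflang="en-us" href="https://example.com/en-us/product/" />
    <link rel="alternate" hreflang="en-de" href="https://example.com/en-de/product/" />

    <!-- On https://example.com/en-de/product/ (same English text, EUR prices):
         its own self-referencing canonical, plus the same hreflang pair -->
    <link rel="canonical" href="https://example.com/en-de/product/" />
    <link rel="alternate" hreflang="en-us" href="https://example.com/en-us/product/" />
    <link rel="alternate" hreflang="en-de" href="https://example.com/en-de/product/" />

Even with this markup, as described above, Google may still fold the two nearly identical English pages into one canonical internally and use the hreflang only to pick which URL to display for each country.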
Google is now including, in mobile search results, an image for some websites. The image is usually a square. And often, but not always, Google uses the image specified by the Open Graph tag. Is this feature documented anywhere? Does it look first at the schema itemprop image? How can I tell Google which image I want it to use?

I don't think we have that documented anywhere. My guess is that this is one of the many experiments that we run, and we're trying to see how it works out. And my guess is that if we start doing this on a broader scale, then we'll have some structured data for you that can help us pick the right image to show for these individual snippets. I'll definitely check in with the team, though, to see if there's something that we can provide in the meantime so that you can specify images there as well. I have some guesses on how we pick these images, but I don't think it's very useful for me to go through my guesses, and then everyone goes off and implements them, and then maybe I guessed wrong. But I'll check with the team.

Is it still good to build guest posts on quality websites with a good amount of traffic in the medical niche? One of my clients is looking for quality guest posts in the medical niche. Or is it good to take links from one quality website?

So I'm not quite sure what you're asking. It sounds like you're trying to build links using guest posts. If you're talking about, where do I place my guest posts, or where do I have other people place guest posts for me, that's something I'd generally avoid doing: just going off and creating guest posts and using them to drop links to your website. There are ways that you can collaborate in a reasonable way with other websites, and sometimes you can do that in a strategic way. But in general, I wouldn't just say, you should create guest posts and drop them on websites and then make sure that they get x amount of traffic per day. From our point of view, that would probably be seen as unnatural link building. And that could, on the one hand, be ignored by our algorithms. It could be picked up by our algorithms if it's done on a really big scale and seen as something that you shouldn't have been doing. It could also be picked up by the webspam team manually, and they might apply a manual action for this kind of thing. So that's something where I'd avoid just blindly throwing out, I'm going to drop so many guest posts with links to our website on these five websites, because they get x amount of traffic.

Now that Googlebot is using the latest version of Chrome, has the Googlebot smartphone user agent changed? The Webmaster Help documentation still lists Chrome 41 in the smartphone user agent, and I'm still seeing Chrome 41 in our logs. Also, when Googlebot is crawling and rendering pages, is Googlebot always in the user agent?

So the last one is definitely the case. When we crawl and index pages, Googlebot will always be in the user agent. I think that makes sense. We'll probably have to rethink how we want to set up the user agent in general, now that everything has moved to a more modern Googlebot infrastructure for rendering. So I would definitely expect that we announce some changes there at some point in the future. I don't know what the timing will be. We've been waiting a little bit to make sure that everything is working well, and experimenting a little bit with the different settings. But I would assume that at some point we'll have a user agent that matches more what we actually use for rendering. On the one hand, for smartphones, that's a little bit easier, because we already have kind of a browser-like user agent. On the other hand, for desktop, I don't know how we'd change that. That's one thing that's always been a bit on my mind as well, in that the current desktop Googlebot user agent is not like a browser at all. And maybe it should look like a browser; I don't know. The difficulty with the desktop user agent is, of course, that lots of sites have it kind of hard-coded, because that desktop user agent hasn't changed at all since, I don't know, last decade or whenever we implemented it. So that's kind of a tricky balance there. But I'd expect that at some point we talk about the new user agent names that we use going forward. We just didn't want to do everything at once, because that tends to confuse more than it actually helps.

John, any update on the new Googlebot? Can we expect these Googlebots to visit us from outside the US? Or maybe we could have more regional Googlebots that visit from other countries, apart from just the US?

I don't expect to see a lot of changes there. So we do crawl from some individual countries. There's a handful of countries where we know that it's hard to crawl from the US, so we crawl from local IP addresses. But I don't expect that to change in the sense that we'll crawl from every country, because it's just not efficient.

All right, thanks. Sure.
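For reference, these are the two user agent strings as listed in the documentation around this time; the smartphone one is browser-like, while the desktop one is not, which is the asymmetry being described. In both, the Googlebot token is present:

    Googlebot desktop:
    Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

    Googlebot smartphone (Chrome 41 era):
    Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36
    (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36
    (compatible; Googlebot/2.1; +http://www.google.com/bot.html)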
Oh my gosh, a long question in the chat. Let me take a look at that. We're a news publisher... Or, if you're here and you have a microphone, feel free to jump in. If you're still here; maybe not.

We're a news publisher website, primarily focusing on the business and finance vertical. We have probably been impacted by the June core update, as we've seen a traffic drop since the first week of June. Agreed, the update announcement specifies that there are no fixes, no major changes that need to be made to lower the impact. But for a publisher whose core area is news content, doesn't that signal that it's probably the content, the quality, or the quantity which triggered Google's algorithms to lower the quality signal of the content being put up on the website, which could have led to a drop in traffic? We're aware that many publisher sites have been impacted. In such a scenario, it would really help if Google could come out and share some advice to webmasters and websites, not site-specific, but at least category- or vertical-specific, on how to take corrective measures and actions to mitigate the impact of core updates. It would go a long way in helping websites who are now clueless as to what impacted them.

I've heard this a few times. I think it's a bit tricky, because we're not focusing on something very specific. For example, when we rolled out the speed update, that was something where we could talk specifically about how we're using mobile speed and how it affects your website, and therefore you should focus on speed as well. With a lot of the relevance updates, the quality updates, the core updates that we make, there is no specific thing where we'd be able to say, you did this and you should have done that, and therefore we're showing things differently. Sometimes the web just evolves. Sometimes what users expect evolves. And similarly, sometimes our algorithms, the way that we try to determine relevance, evolve as well. And with that, like you mentioned, you've probably seen the tweets from Search Liaison, there's often nothing explicit that you can do to change that.

What we do have is an older blog post from Amit Singhal, which covers a lot of questions that you can ask yourself about the quality of your website. That's something I'd always recommend going through. That's something that I would also go through with people who are not associated with your website. Often, as a site owner, you have an intimate relationship with your website; you know exactly that it's perfect. But someone who's not associated with your website might look at your website, compare it to other websites, and say, well, I don't know if I can really trust your website, because it looks outdated, or because I don't know who these people are who are writing about things. All of these things play a small role. And it's not so much that there's any technical thing that you can change in a line of HTML or a server setting. It's more about the overall picture, where users nowadays would look at it and say, well, I don't know if this is as relevant as it used to be, because of these vague things that I might be thinking about. So that's where I'd really try to get people who are not associated with your website to give you feedback. Sometimes you can also do that through the Webmaster Help forums, either the ones from us, or lots of communities out there with other webmasters, where you can talk with other people who've seen a lot of websites and who can look at your website and say, well, I don't know, the layout looks outdated, or the authors are people that nobody knows, or you have stock photos instead of author photos, why do you have that? All of these things are not explicit elements that our algorithms would be trying to pinpoint, but rather things that combine to create a bigger picture. So that's the direction I'd take there. I know a lot of people have been asking for more specific advice, so maybe there's something that we can put together. We'll see what we can do internally to put out a newer version of that blog post, or provide some more general information about some of the changes that we've been thinking about there.

Let's see. Another question from the chat.

During the month of April, I decided to change the URL structure of one of my websites, and ever since, we've seen a significant drop in organic traffic in Search Console. However, Google Analytics shows a lot more organic clicks. Where could this discrepancy be coming from? Is it possible that the change of URLs messed up the way that Search Console tracks clicks and impressions?

So Search Console's clicks and impressions are based on the search results that were actually shown. So that's not something that would get confused if you make a change within your website. What can happen, if you change the URL structure of your website, is that it takes us a bit of time in Search, in general, to understand that better. So offhand, my suspicion is, if you're seeing a significant discrepancy between Search Console and Google Analytics with regards to organic traffic, that perhaps you're tracking something in Google Analytics incorrectly. Because with Google Analytics, depending on how you have it set up, you might be tracking things twice when a user opens a page, you might be tracking things in different ways, you might be tracking other kinds of traffic as organic when it shouldn't be, or things might be split across different sessions. So that's the direction I would take there. My suspicion is that if Search Console is showing you that things have dropped after you've made a significant restructuring of your website, then that would be in line with what I would expect from a significant restructuring. And these kinds of restructurings, especially within a website, do take a bit of time to settle down again. So my guess is, when Search settles down and things are in a stable state, then you should be able to compare the Search Console and the Analytics numbers that you're seeing, and they should be roughly the same. They won't be exactly the same, because they track things in very different ways, but they should be roughly, roughly similar. Another thing that might also be playing a role here is traffic from the Discover feed, which is a bit tricky to track in Google Analytics, and which I think we show separately in Search Console now. So that might be something to also look at. Maybe you're seeing a lot of clicks or impressions from there, and you don't see those in the performance report, because they're tracked separately in Search Console.
John, could I ask a question? Sure.

First of all, thanks for your time. So, from what I've read, Google can handle text-based information within accordions completely. But a client of ours in the streaming business has their seasons behind accordions, and when just searching for the show, they're number one, but when searching for the show and the season, they're nowhere to be found. So if you could clarify things here for me, that would be great.

It's hard to say how you might have set that up. So there are different ways that you can implement these kinds of accordions on a site. If it's done in a way where you're just using CSS to hide something that's already loaded, then we would be able to index that, because we crawl and render the page, and the content is there; it's just not visible. That would rank essentially the same as normal, visible content. What might happen there is that we wouldn't show it in the snippet directly, but it would still rank. On the other hand, if it's an accordion where you click on the top level, and it then does a quick server-side request, pulls in the content, and shows that, that's not something that we would know about at all, because Googlebot doesn't click on different elements to see what else gets loaded on the page. So from the technical implementation side, I'd double-check that, especially since you're saying that they don't rank at all. If they were ranking a little bit lower, then we'd know about the content, and it would be more a matter of, well, they're just not ranking as well for that specific phrase. But if they're not ranking at all, then that sounds almost like a technical thing, where we can't recognize the content at all.

Thank you so much. Sure.
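A minimal sketch of the two implementations being contrasted; the class names and endpoint are invented. In the first, the season content is in the HTML that Googlebot renders; in the second, it only exists after a click triggers a fetch, which Googlebot will not do:

    <!-- Indexable: content is in the initial HTML, merely hidden with CSS -->
    <div class="accordion">
      <button class="accordion-toggle">Season 2</button>
      <div class="accordion-panel" style="display: none;">
        Episode list for season 2 ...
      </div>
    </div>

    <!-- Not indexable: the panel is empty until a click fetches the content -->
    <div class="accordion">
      <button onclick="loadPanel('/api/seasons/2', this)">Season 2</button>
      <div class="accordion-panel"></div>
    </div>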
Hello, John. Sure.

You mentioned guest posting, and I'm interested. Of course, I understand that it's a black hat technique. But, for example, we are going to provide an English version of a website about SEO, and nobody knows us, because it will be a new website. And there are a lot of recommendations that you can write guest posts for other websites, but high-quality ones. For example, I create content for my website, and I can create the same level of high-quality content for other websites, just to build brand awareness, and of course, to get links to my website. Do you think it's not a good idea, and we should just use other outreach techniques now? Or, if I create high-quality content, what do you think, does it help or not? Or does Google ignore these links?

I would be really careful. So in the past, we have said you should not use guest posts for link building. So that's something where, as a starting point, our guidelines are pretty clear. You should not essentially be publishing content on other people's websites and including links back to your website, because then you're the one putting that link on other people's websites. It's not that the other website is organically recommending you; it's you going out and putting that link on the other person's website. So that's our starting point there. If you're talking about brand awareness, then obviously that's not a problem. You can just use nofollow for those links. Then people will recognize that you're an expert on this topic, and they'll see, oh, this is a new brand or a new website that I don't know about. They'll click on those links, they'll go to your website, they'll learn more about you there. That's perfectly fine. If these links are nofollowed, then you're avoiding all of these difficult discussions around, is this person dropping links on other people's websites. So that's, I'd say, the ideal situation for growing awareness of your content, of your website.

I suspect, when it comes to SEO sites in general, it'll be hard, because there are so many people who are saying, well, I am an SEO expert, and I have all of this content, and trust me. So that'll probably be hard. The other thing to think about when you're doing guest posts like this is, maybe there are ways that you can do it where you do a guest post on other people's sites, you have those links with a nofollow there, but at the same time, those other sites are saying, well, look at this awesome content that this person wrote, and they organically link to you, because they've seen the content that you've written for them directly.

So from my point of view, I wouldn't say that if you're doing guest posts, then you will be penalized and our algorithms will never see you favorably again. But it does make things a lot harder. And in particular, if we look at a website and see that all of the links to it come from guest posts, then that's an obvious situation where anyone from the manual webspam team looking at it will say, well, none of these links are actual recommendations for the website; therefore, we should really be careful with how we treat those links in general. On the other hand, if we see that there are lots of normal, organic links pointing to the website, and the occasional guest post where you spread some additional knowledge on other people's websites, then that's usually something where, from a manual webspam point of view, they'll look at it and say, well, I can see they're trying some things, but overall they're good guys, watching out to do the right thing, and we don't really need to do anything here. So yeah, I mean, I don't want people to go off and say, well, John says you should do guest posts, because that's definitely not what I'm saying. Going out and just dropping links on other people's sites is not the way to build links. And over time, it will result in the webspam team noticing it, and our algorithms picking it up and saying, well, all of the links to this blog over here are basically terrible, weird guest posts; we should ignore all links to this blog. That's also not what you want. So I primarily see it as a way of building brand awareness. And if you have those links nofollowed, that doesn't matter for brand awareness, and over time, that builds up your reputation too.
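For what it's worth, a minimal sketch of the kind of nofollowed byline link being described, with an invented author name and URL:

    <!-- In a guest post's author byline: the link builds awareness
         without passing a ranking signal -->
    <p>Written by <a href="https://example.com/about" rel="nofollow">Jane Doe</a>.</p>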
OK, an additional question about redirected links. One time, I read on Reddit, I think, that you replied to a guy saying that Google doesn't count these redirected links, especially if you bought some expired domains and tried to manipulate the results. But I'm interested: for example, I have a URL structure, and I want to change this URL structure on my website. And, for example, I have old content, and I want to refresh this content onto other pages. Does that mean Google doesn't count the links that point to the old pages or the old structure?

Redirects within your website are perfectly fine. I think the thread that you're thinking of is perhaps about people buying expired domains and then just redirecting them to their website. That's a really old-school technique, and it's something that we work really hard on recognizing and ignoring. But if you're redirecting within your website, if you're restructuring, if you're moving from one domain to another domain, that's where redirects come into play. That's how they should be used. That's what they're for.

All right, let me run through some of the submitted questions. So much still left.

Working with a client, a food blogger, who has some AMP pages created but still has the original non-AMP version of the same recipe available as well: is this counted as duplicate content or not?

So if the AMP page is connected to the non-AMP page, with the link rel alternate, link rel amphtml I think it's called, and the canonical back to the web page, then that is not duplicate content. Those are essentially connected AMP pages. We would primarily index the HTML version, and then swap in the AMP version as needed for mobile users when we show it in the search results. So that's seen as one page. On the other hand, you can have separate AMP pages as well. You can make pages AMP-only, and then those would be indexed individually. And in that case, if you have one HTML page that shows a recipe, and one AMP-only page that shows the same recipe, then that would be considered duplicate content. We wouldn't demote your website because of that. We wouldn't say that your website is bad because of that; sometimes it just happens for technical reasons. But you're kind of competing with yourself in a situation like that, because you have two different pages that are targeting the same keywords, the same user. So my recommendation would be, if you recognize that you have multiple pages with the same content, or with mostly the same content, pick one and focus on that one. Fewer pages make things easier from a technical point of view, and they help us to concentrate the value in those pages, so that they rank better as well. AMP is a great way to make web pages, so maybe that's a good approach there. On the other hand, maybe you're saying, my non-AMP version is also really fast and awesome, and I prefer using it because it's easier to create, or for whatever other reason; that's fine too. Just using AMP by itself is not a ranking factor. So you don't need to artificially say, wow, Google is saying I should do everything in AMP, therefore I will delete my HTML pages. Maybe the HTML version is the one that you want to continue using. Maybe it's the AMP version.
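A minimal sketch of the paired setup described above, using invented URLs; with these two tags in place, the two documents count as one connected page rather than as duplicates:

    <!-- On the canonical HTML page, https://example.com/recipe/ -->
    <link rel="amphtml" href="https://example.com/recipe/amp/" />

    <!-- On the AMP page, https://example.com/recipe/amp/ -->
    <link rel="canonical" href="https://example.com/recipe/" />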
An XML sitemap errors question. It's unclear what effect an error on an individual URL has in an XML sitemap. Is the entire sitemap ignored? Is any content after that URL ignored? Or is it just that specific URL?

It's just that specific URL. So if we can parse the XML file properly, and it's just one of those elements that is broken internally, then we can skip that element and parse the rest of the XML file. On the other hand, if the element is broken in a way that means the rest of the XML file can't be parsed at all, so maybe you have an open bracket and you forget to close it, for example, then the rest of the XML file is unreadable, and in that case, of course, the rest of the file would not be usable as a sitemap. But if it's more of a logical error within one of these URL elements, then it stays within that element and doesn't affect anything else in the sitemap file.
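To illustrate, a minimal sitemap sketch with one logically broken entry (the URLs are placeholders); a parser can drop the middle element and still use the other URLs, whereas an unclosed tag could make everything after it unreadable:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/page-1</loc>
      </url>
      <url>
        <!-- Broken within the element: an invalid lastmod value -->
        <loc>https://example.com/page-2</loc>
        <lastmod>not-a-date</lastmod>
      </url>
      <url>
        <loc>https://example.com/page-3</loc>
      </url>
    </urlset>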
Does Google use rel equals me for crawling, indexing, and ranking?

I don't think we use that at all. It might be that it comes into play with some kind of structured data, when we try to understand entities in a more specific way. But in general, I don't think you would see it affecting crawling, indexing, or ranking.

In the future, will access to the old Webmaster Tools be removed or not? Will the search appearance section be moved to the new version?

At some point, we're going to have to turn off the old Webmaster Tools, yes. So that will happen. I don't know when that will be; I'm vaguely guessing that towards the end of the year we'll probably have to make that cut. However, we are trying to move as much as possible to the new Search Console, so that's hopefully happening. All of the features that we see people still using are planned to be moved over. What might happen, though, or what ideally would happen even, is that we rethink a lot of these features, and we don't just blindly move them over one to one, but rather we think about what people are actually trying to do with a feature, and how we can help them do it more efficiently. So that's something that will affect some of these features, in the sense that they won't be moved over exactly as they are, but rather we'll try to rethink them and consider what we can change to make them easier to use. There might even be features where we say, well, people really love this feature, but it has absolutely no effect on most websites; therefore, maybe we shouldn't be moving it over. Maybe we should be dropping it, and giving people a clear signal to let them know that they actually don't need to spend time on this specific feature. I don't have anything in particular in mind with regards to things that we won't be moving over, but that's certainly a possibility. If there's something explicit that you're missing in the new Search Console, and you're saying, I really want this, or I really want you to prioritize moving this feature over because I hate switching back to the old version, then use the feedback link in Search Console and let us know about it. And ideally, don't just go in there and say, I want this feature, but rather say, I want this feature because I need to do x, y, and z, and this feature lets me do that in an efficient way. That helps us to figure out which part of the feature is really critical, and which parts we have to take particular care with, to make sure that the new version is as fantastic as it can be.

In the near future, will the disavow tool be able to process and execute all disavowed domains in a few minutes?

I don't see that happening. I think the disavow tool is one of those things that so few websites need to use that we'll probably just be moving it over as it is. And we probably won't be setting it up in a way that prioritizes recrawling of all of the websites that link to your website. I don't see that happening.
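For context, the disavow tool takes a plain text file listing individual URLs or whole domains; a minimal sketch with invented names:

    # Lines starting with # are comments
    # Disavow one specific linking page
    https://spammy.example/some-page.html
    # Disavow every link from an entire domain
    domain:spammy-directory.example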
Is there any chance to rank a 90% to 95% identical service website under one domain, localized for x locations, without getting into a canonical issue? So, different vendors that have the same services; I think that's the direction you're going in there. I mean, sure, we can index these if they're different websites or different versions of the same content. But, as I mentioned with some of the other questions, you will run into the situation where you're competing with yourself. And if you end up competing with yourself across x different locations, which might be, I don't know, 100 or 1,000 different locations, then you're really diluting the value of your content across all of these different versions, which makes it a lot harder for any of them to rank in the overall competitive landscape. So from that point of view, you can do this; nothing will stop you, and these pages will probably be indexed in some way. But will they really do what you want them to do for your business? I doubt that. So that's another case where I'd say fewer URLs would probably work better than duplicating things across a ton of different URLs.

Does a do-follow link to a noindex page lose PageRank? By lose, I mean that the PageRank that flows to the other links on the page is smaller when it is split across more links. And the same question for a nofollow page.

So a do-follow link is just a link that doesn't have a nofollow. From a practical point of view, we see these links as links, as they are. When we calculate PageRank, I'm sure our systems take some of that into account, how that gets distributed further from there. But I don't think it makes any sense from a practical point of view to focus on this specifically. So I don't think it really makes sense to worry about whether a link to a noindex page is worse or better than a link to an indexable page. If people are linking to pages on your website that are not findable in Search, then that might be a sign that perhaps those pages should be indexed; otherwise, people wouldn't be recommending them. But that's more of a tactical thing from an SEO point of view: are you providing the right content for indexing, or are you blocking things that you could be making indexable?

How often are the links updated in Search Console? I don't know. I assume it's something like once or twice a week. As far as I know, most of the data in Search Console is updated at about that frequency. I don't know if anything in particular is different with regards to links.

Is it better to use WordPress instead of having your own website software, not regarding special tags, but regarding the HTML structure for Googlebot? No. You can use whatever platform you want. HTML pages are HTML pages. You could take the output of WordPress, copy it into a text editor, and save it as an HTML file, and it would be the same from our point of view. WordPress is definitely a way to make websites easily, in a way that by default just works well for Search. But there are lots of different ways to make websites, and that includes making them with your own website software. I think one of the advantages of using a CMS like WordPress, or any of the hosted platforms, is that you have a lot fewer things to worry about, in the sense that you don't have to worry about how the HTML is served. You don't have to think about security as much, because you can rely on the existing mechanisms that are well known for these CMSs. Especially if you're using a hosted platform, security is a lot less of an issue, and scaling is a lot less of an issue. So the advantages there are more practical: you have to do less to get your content out there. Sometimes it still makes sense to roll your own website software, especially if you're doing it as a hobby; then it's a lot of fun to run your own server. But you don't necessarily need to do that. There's no SEO advantage to using WordPress over any other CMS, or over doing it yourself.

Oh, wow, we're running out of time, and there are still so many questions. OK, let me see if I can run through some of these a little bit quicker, and then maybe we'll still have a little bit of time for more questions from you all. And of course, the first one is a really long question about international websites. I will skip that, because we're kind of out of time. But for longer questions like this, I'd recommend checking in with the Webmaster Help forum, because they can look at some of the examples that you have there and give you a little bit more specific advice.
Does Google prioritize above-the-fold content over the rest of the page content when ranking a page? Not necessarily, no.

Another FAQ markup question: should FAQ pages have the answers expanded, or can they use an accordion style where the answer expands when clicked on? Yes, you can use an accordion style. The important part is that the question is at least visible by default.
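As a minimal sketch of the FAQ structured data that goes along with such a page (the question and answer text are placeholders), the markup is the same whether the answers are expanded or behind an accordion:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [{
        "@type": "Question",
        "name": "Can the answer be in a collapsed accordion?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Yes, as long as the question itself is visible by default."
        }
      }]
    }
    </script>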
One of our websites got hit by an unnatural links penalty. We removed many links, created an Excel file, and disavowed and removed them. Still, our reconsideration request failed a few times. I feel like even if we disavowed all of our links, it would be denied, and that nobody is actually looking at it. What's up with this?

So people do manually look at these requests, and they do look at whether or not you're really significantly covering the links that you should be focusing on. So it sounds to me like you're seeing some links there and cleaning those up, but maybe you're missing some other things that you should also be focusing on. Oftentimes, when reconsideration requests fail, they'll give you some sample URLs to think about as well, so I'd double-check for those. If you're really unsure, I would recommend posting in one of the Webmaster Help forums, so that people can take a look at the things that you've disavowed and point you in the direction of maybe some other things that you weren't thinking about.

A spammy title tag, including just keywords, works better for our rankings than a natural title. What should we do?

You can do it either way. It's not that we're saying your website will be penalized for having a spammy title tag. However, a title is a really obvious thing in the search results. So just because you have a spammy title tag that ranks well, assuming that's actually the case here, it doesn't mean that more people will be clicking on your link, because maybe they don't understand what the page is about, or maybe they think, oh, this guy is just trying to spam, I'll click on the number two result instead. So I would primarily see the title as a way of encouraging people to click on your pages, of saying, well, this is the content that I have, and this is why it's relevant to your specific interest at the moment. So I'd use the title more in that direction, rather than purely as a ranking element, because there are lots of other ways to improve your ranking.

We experienced a big drop in traffic across two sites simultaneously in April, and we have not been able to resolve it. It was around the time of the de-indexing bug. Are there any factors that could cause such a big drop outside of a major update? It's hard to know where to start looking.

I don't know which sites these are, so... Hi, John, this is my question today. Oh, OK, fantastic, someone's here. Cool. So that's kind of hard to say. Offhand, the de-indexing bug that we had in April wouldn't be affecting it, because that's something that has been completely resolved. So it sounds more like a general change in ranking overall.

Yes. When you have something where two sites have dropped in the same hour on the same day, would you be looking at some sort of commonality between the two? What's the chance of that being content-related? Obviously, content improvements you can always make. But they're using the same code base with two completely different subject matters, completely different areas. Where would you start looking, really?

I guess what I would try to do is get rid of the most obvious things, and narrow things down a little bit from there. So double-check the manual actions; I assume you checked that. Double-check for technical issues. In particular, find pages on your website: are they indexed, are they not indexed? Is the canonical correct? Or are they being deduplicated, one against the other? Especially when you're saying they're on the same code base, it can sometimes happen that pages on one website happen to be indexed as the canonical for the other website. And if the other website says, actually, these pages are noindex, then we might pick a canonical that is noindex. So narrow things down as much as you can, to get rid of the obvious technical questions. Because if it's a technical issue, then that's a lot easier to fix, and a lot easier for you to test. And in the end, if you figure out that it's not a manual action and it's not a technical thing, then it's really something where the overall website doesn't match what our algorithms are expecting anymore. That's the point where it gets hard, for sure, and where you might need to take a step back and rethink: how can we provide our website in a way that works well for the modern web user? What might people be missing on our website? In what ways are we perhaps presenting ourselves that are detrimental to how users might look at these pages? For example, if these pages have a ton of ads on them, then users who go to them might think, oh, where's the content? And that could be reflected in our algorithms over time. But yeah, narrowing that down is really tricky.

Are there any rolling algorithms, from a search quality perspective, that could trip on a certain day like that?

It can happen that the algorithms we roll out have that kind of immediate effect across a number of sites. That can certainly happen. If you want, you can also drop a link to your forum thread, maybe here in the chat, and I can leave you a comment there, actually. Oh, cool. OK. I think, yeah. Cool. Ah, OK. Yeah. Oh, you have two questions. Yeah, one took ages to come up because I had the link in it. All right. Cool. Thank you. I'll take a look at that afterwards.
We had a manual review two months ago. We had used a bad WordPress plugin. We fixed the problem, Google removed the manual action, and the traffic came back. But after one week, we lost all of our rankings. In Search Console, there's no manual action. We asked in the forum and didn't get any advice. We have a high score in PageSpeed Insights. The site has about 80 posts of original content. We removed all outgoing links. We disavowed all backlinks. We removed all schema markup. Nothing changed.

So on the one hand, it sounds like you're making some pretty drastic changes, probably a lot more than you'd need to. On the other hand, some of these changes seem very technical in nature, and might not be related to the overall view of your website. So I would assume that this isn't really related to the WordPress plugin. It's not related to the manual action, because if that's resolved, then that's resolved; it's not that our algorithms would hold a grudge. It sounds like from a speed point of view, you're doing OK, but speed doesn't mean that a website will rank automatically. It still has to be relevant; it has to match what our algorithms expect. Removing all outgoing links, I think that's probably too much. I don't see that ever helping a website. I would recommend making sure that you do have your outgoing links there, to make it easier for people to find more context for the information that you provide. Disavowing all backlinks is also an extreme step, because if you're disavowing all links, then none of those links are helping your site at all, which makes it even harder for us to rank your pages. So my recommendation would be to step back from some of these extreme changes, like removing the outgoing links and disavowing all backlinks, and instead to focus more on the website overall: think about where there might be quality changes that you could work on, and what directions you could take the website in to improve its quality overall. If you're talking about 80 posts of original content, that's a fairly limited scope of things to look through. I would also keep in mind that when you're looking at the quality of a website, it's not just the textual content of the posts that matters. It's really the whole website overall that affects how users perceive it. So try to take a look at the bigger picture there.

Let's see, a structured markup question, with the new How-to markup. How does this interact with recipes? If you have a Pan Galactic Gargle Blaster page, should we use both? Or do Recipe and How-to target different types of searches? Also, is it worth adding speakable markup to the steps?

I think you can probably do all of that. I don't think we would show all of these at the same time, but we would probably pick one of them and show it, depending on how people search. Also, speakable markup is, of course, specific to voice devices, so that wouldn't affect how things are shown in the visible search results.

Can we do SEO using Blogger? If so, how? Yes, you can do SEO using Blogger. Lots of websites use Blogger, and they work well in Search. So that should be fine. Similar to WordPress, it's one way of making web pages, and you can make web pages in lots of different ways.

I bet if I refresh this page, there will be a ton more questions. But we're kind of over time already, so maybe I'll just open it up to any of you, if there's anything still on your mind that maybe I can help with. Nothing?

Hello, John. Hi. Yeah, I have one question. I have a website at https://www.example.com, and I have moved my content to https://example.com. In that case, do I have to create a new Webmaster account, or do I go with the existing one? So I think you mean the Search Console account? Yes, the Search Console account. Yeah. So there are two ways now that you can do that in Search Console. On the one hand, it's per host name. That's the traditional way to do it, with a meta tag or something like that for verification. And for that, HTTP versus HTTPS and the www at the beginning do play a role, so you have to pick the right version. You can also verify both of them if you want, if you're unsure where the data is.

In that case, do I have to do a redirection from www to non-www? If you want, sure. You can pick one of those. That's a best practice; we recommend doing that. By setting up the redirect from one version to the other, you help us to focus on the version that you picked. So that makes it easier for you, and it makes it easier to track. That's usually what we recommend. The other thing you can do in Search Console is, if you verify with the DNS, with the domain name, then you can verify the whole domain. And that includes all of the versions: HTTPS and non-HTTPS, www and non-www. It's all included in the same account. So those are the two ways that you can do that. OK, thank you. Sure.
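As a sketch of that best practice, here is one common way to set up the www-to-non-www redirect on an Apache server, assuming example.com as the preferred host; other servers have their own equivalents:

    # In .htaccess or the server config, with mod_rewrite enabled
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
    RewriteRule ^(.*)$ https://example.com/$1 [L,R=301]

The 301 status tells Google which of the two host names to focus on; the domain-level DNS verification mentioned above covers both variants in one Search Console property either way.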
Last question, John. OK. I would like to know if you have any news about the question of why some 404s in Google Search Console show up in the 404 report, while others show up as crawl anomalies.

I asked the team, and they didn't have an answer for me offhand. So you found a tricky question. I think the crawl anomaly report is generally one that seems to confuse people, because there are a few things that come together there. So maybe we can make that a little bit clearer in the future. But I'm still pushing for an answer on that particular part.

Because I looked at my websites, and I recently removed an AMP plugin from my blog, and half of these old AMP pages show as 404, and half of them show as crawl anomaly. And I'm like, why are the same kinds of pages listed in different ways? But hopefully, we can get that figured out. I switched to kind of a pure AMP setup, I think, for my blogs, to try things out. So that's why these old AMP pages show up as 404. But we'll see. OK. Thank you. Cool.

OK. With that, let's take a break here. Thank you all for coming. Thanks for submitting so many questions. I'll set up the next batch of Hangouts probably later today. So if there's anything on your mind that you still want to have covered, feel free to add it there. Or, as always, feel free to drop by our Webmaster Help forum, where tons of people who have worked on similar issues are often able to help out as well. And finally, on Twitter, we're doing a thing called Ask Google Webmasters, where if you include that hashtag, we'll try to include your question in one of the future short videos, if it's something not site-specific that you'd like to ask about.

All right. Great. So thanks again, and I wish you all a fantastic weekend. Thank you. Bye, everyone. Bye. Bye-bye.