All right, welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst at Google here in Switzerland, and part of what we do are these office hours hangouts, where publishers like the ones here are welcome to join in and ask any questions around web search and their websites, and we can try to find answers for issues that have shown up. As always, if any of you want to get started with the first question, feel free to jump on in now. Hello, John. Hi. How are you? Pretty good. How are you? I'm good. John, I have some questions. Recently, we have some issues in one project, and all the questions are related to this project. So one question is, one of our clients is launching a website, an e-commerce website. They sell spa products. Now, most of the products they are selling have descriptions, and about 50% of each description is actually the product specification, and they are getting the product specification from the actual manufacturer's website. So almost 50% of the content is the same. And another issue is that the manufacturer is an Australian website as well, so both are operating in Australia. So will it be an issue from Google's perspective? No, that shouldn't be a problem. So what happens there is, when we recognize that exactly the same content is on two of these websites, then we'll try to find the most matching one to show in the search results. So if someone is looking for something general that is only in the description which is shared across these two, we'll try to find the best match that works for them there. OK, so will that be a duplicate content issue? I mean, technically, it's duplicate content, but it's not like we would demote or rank a website lower because it has content like that. That's completely normal. With a lot of products, when they're sold worldwide, you end up having the same description in lots of places.
I think it makes sense to have a unique description or at least have unique content around the whole thing. So that could be something like reviews that either you do as an expert or that other people do for the product. That could be additional information about why your location is the best place to buy this, which might be as simple as, well, being located in the same country as your customers. All of those things kind of help. And the next question is, one of our clients provides cleaning services around Australia, in various locations in Australia. But what they did is they created three pages, one for house cleaning, one for home cleaners, one for domestic cleaning. Now, all of them are almost the same thing, and the content on the pages is almost the same. Now, at the beginning, it worked fine. They ranked for those three keywords perfectly. But now, their ranking is dropping. Is it because they have three pages which have almost the same content and the same topic? I think what probably happens there fairly quickly is that it ends up looking like doorway pages, that it's just like swapping out the location and all of the rest of the page is actually the same. And what usually works better there is to have one really strong page that gives the general information, rather than to dilute it across a lot of different pages. So if we remove two pages, keep one page, and redirect those two pages to that one page, will it work better? Probably, probably. OK. What about if, instead of removing those pages, we use canonical tags? Will that work? It would be pretty much the same as a redirect for us. So in search, we would try to focus on one URL. For users, both of those URLs would still be there. So for example, if you want to use the URLs for advertising, then maybe it makes sense to keep both of these. If you want to make sure that everything is really combined on one URL, then I would just redirect. OK. Thank you. All right.
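The two consolidation options discussed above can be sketched in code. This is a minimal illustration, not a real web framework: the URLs are hypothetical, and the functions just show what each option sends back to a client.

```python
# Two ways to consolidate near-duplicate pages onto one strong page.
# Option 1 (redirect): users and search engines both land on the target.
# Option 2 (rel=canonical): search consolidates on the target, but the
# duplicate URL stays reachable for users (e.g. for advertising).

def redirect_response(target_url):
    """Option 1: an HTTP 301 status line plus Location header."""
    return ("301 Moved Permanently", [("Location", target_url)])

def canonical_tag(target_url):
    """Option 2: the <link rel=canonical> tag to place in the page head."""
    return f'<link rel="canonical" href="{target_url}">'

status, headers = redirect_response("/cleaning-services/")
print(status)                                # 301 Moved Permanently
print(canonical_tag("/cleaning-services/"))  # <link rel="canonical" href="/cleaning-services/">
```

Either way, search focuses on one URL; the redirect is the stronger signal when you want everything truly combined.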
One more question before we dig into the list. Yes. So this is Michael Lewittes from Gossip Cop. And I think we spoke maybe about a year ago, and we redesigned the site. We worked on the content. I mean, we have been working 24/7. And in the last few months, we were very happy with how everything was going, until about Tuesday night, when we started seeing some weirdness. And then today, on searches for basic celebrities for whom we generally rank, we were seeing very absurd sorts of results. I mean, one of the classic examples was, if you put in Ben Affleck's name today, the top results were a story about his brother's beard, a story about Rose McGowan, and one about J.K. Simmons, who wants to be in the next Batman film. There was a thing with Angelina Jolie, which was a sort of Ukrainian site pretending to be an American site, which had leapfrogged People magazine, one of the biggest magazines. So there seemed to be some weirdness. But again, we were wondering, maybe we did something, because a lot of our traffic has been, thankfully, AMP. And none of our traffic today was on any new stories, of which there were about 12 or so. It was all 24 to 48 hours old. And we're wondering — I know you can't really speak too much about the algorithm if there's a tweak — but nothing we've done has changed. If anything, I think we keep doing better work by the day, frankly. So I don't know what specifically changed there. It's always possible that there are some algorithmic changes happening, that maybe there's something from the web spam side as well that might be kicking in or falling out, where suddenly all of these weird sites are showing up again. But one thing you mentioned there with regards to the content, where you said that it was the older content that was showing up — are you seeing that we're not picking up the new content?
The first 25 or so stories in analytics were all 24 to 48 hours old, with very little traffic, frankly, because the new stuff from yesterday wasn't being seen. OK. So I don't know. That sounds like it might be something of a technical issue as well. I'm happy to take a look at that. I lost him for a second. Yeah. So what might be useful is if you could send me a message with your domain, obviously, so I can double check with the crawling side to see that things are being crawled properly, and some of those sample searches that you said were looking really weird. So for us, what's really helpful is especially the more generic searches, like you mentioned on people's names, where you're seeing really, really bad results in the top search results. That's really useful for us to pass on to the team and say, hey, these results are really bad, we need to fix that — regardless of whether it's your site that's up or one of the other sites. So that's really useful for us there. But in general, I'm not aware of anything celebrity- or news-focused that would be happening there. Pardon my naiveté — how do I get in touch with you? Oh, you can send me a note on Google+. Google+, OK. So you can probably just send me a private note there. Sometimes it's a bit tricky on Google+ to send private messages, but you can just add me there and we can do that. Thank you. Sure. All right. More comments or questions before we get started? Hi, John. Hi. I have one question about the new Search Console. We have some errors about soft 404 errors. When we try to find these pages, the pages look normal. They don't have any redirects or 404 status. The headers come back with a 200 status. What could be the problem there, given that this error exists? So a soft 404 means that you're actually not returning a 404 code.
So not having the 404 there would be kind of correct — or, well, at least correct in the sense that you're not returning a 404. The most common soft 404s that I've seen are redirects to a shared page. Like, you remove a page from your website and you redirect that page to your home page. That's often a sign for us that it's actually a soft 404, not that you've moved this detailed page to the home page, that kind of thing. The other is an empty page with a 200 result code. That's also a very common case, where we look at that and say, oh, actually there's no content here that we should index. Sometimes it's as simple as an e-commerce page that has a product listing that says, this product is not available. Or a search result page that you have on your site that says there are no results found for what you were searching. So all of these things kind of combine into the soft 404 group. What sometimes also happens is that we get it wrong. So I see this sometimes, for example, on programming websites where you're talking about error messages and you're saying, well, you can serve a 404 error like this, and it will show a "page not found," and this is the code to do that. And sometimes we look at that and we say, oh, it says this page is not found, so we will treat it as a soft 404. And those are the cases that are really useful for us to hear about, because then we can take a look at that and say, oh, we need to fix that. Does any of that match what you were seeing? Not really. OK. So what I would do then is either send me a note or maybe do a post in the help forum with the URL that you're seeing and the details that you have there. Then people can generally take a look at that and say, oh, this is because you're doing this. Or they can look at it and say, oh, this is an error, and we'll pass it on. Actually, this has come from the help forum. OK. Yeah. Cool. So if you can send me a link to your forum thread, then I can take a look there too. OK. Thanks. I appreciate it. Thank you. Sure. Thanks.
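The soft 404 patterns described above can be sketched as a small classifier. This is purely illustrative — the phrase list and the logic are assumptions for the sketch, not Google's actual heuristics.

```python
# Rough sketch of flagging "soft 404" candidates: pages that return
# HTTP 200 (or redirect to a shared page) but whose content signals
# "not found". Phrases below are illustrative examples only.

NOT_FOUND_PHRASES = (
    "page not found",
    "no results found",
    "this product is not available",
)

def looks_like_soft_404(status_code, body, redirected_to_home=False):
    """Return True if a page looks like a soft 404 candidate."""
    if status_code == 404:
        return False          # a real 404, nothing "soft" about it
    if redirected_to_home:
        return True           # removed page redirected to the home page
    text = body.lower()
    return any(phrase in text for phrase in NOT_FOUND_PHRASES)

print(looks_like_soft_404(200, "<h1>Page not found</h1>"))  # True
print(looks_like_soft_404(404, "gone"))                     # False
print(looks_like_soft_404(200, "Welcome to our spa shop"))  # False
```

Note that this kind of text matching is exactly how a programming article about 404 pages can get misclassified, as described above.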
Hi, John. Hi. I have a question. We created a few million redirects five years ago for our website because we changed the URL structure, and now, using the new Google Search Console, we found those links there. So it's five years old. How can we remove them? Or is it OK to keep them? Does it affect SEO? That shouldn't be any problem. Yeah, that should be completely fine. These things sometimes stay around in the sense that our systems just have a very long memory, and they will remember them and try them again. I was scared when I saw five million links there from five years ago. It's a little bit late. So do you mean in the index information or in the links to your site? In index coverage. Index coverage. Are these URLs maybe blocked by robots.txt? Is that possible? No, no, no — 301s, with redirects. Yeah. Sometimes it just takes a while to be reprocessed. So what you could also do — or what might be happening there — is that we're actually indexing the redirection source instead of the target URL. Exactly. Yeah. That can always happen. So for canonicalization, where we pick which URL to actually show, we take redirects into account, but we take a lot of the other canonicalization factors into account as well. So if you have a rel=canonical on those pages, that helps us. If you have the target URL in your sitemap file, that helps us. All of these things add up. And we might look at it and say, oh, there's a redirect here, but actually there's a rel=canonical here, and the internal linking is here as well, so we will keep this URL indexed instead of the other one. And that can happen sometimes across a large part of a website. So what you could do, if you feel really strongly about this, is make sure that all of the other signals really combine and say, this is the URL that should be indexed. If you don't care that strongly about it, then you can just leave it. So that shouldn't have any effect on SEO at all. It's just a matter of which URL is actually shown.
And maybe it's this one, maybe it's that one. Thank you. Sure. All right. Let me jump through some of the questions that were submitted. Lots of stuff here as well. Google explained that content hidden for UX purposes will be given full weight when sites are switched over to the mobile-first index. How will Google deal with abuse? Yeah. So we have to deal with this anyway. So it's not something that's completely new. Sometimes sites do weird stuff — sometimes on purpose, sometimes by accident. In general, what we try to do in these kinds of cases is to have algorithms that recognize the problematic situation and try to ignore it, so that sites that do this accidentally don't have a big problem with that, and sites that do it on purpose don't have any advantage from doing this kind of sneaky stuff. And usually, that works fairly well. Sometimes we run into problems, where we're always happy to get feedback. But for the most part, especially things like the keyword stuffing that's often mentioned in this regard, that's something we can deal with fairly well algorithmically. Then Glenn asks about links in PDFs. I double checked on this, but I haven't heard back. So we'll have to get to this again in one of the next hangouts. We know that every page indexed is taken into account when Google evaluates quality. But if an affiliate site is disallowing a directory that contains many redirects to e-commerce sites, and many of the URLs in that directory get indexed, is that a problem quality-wise? Should we de-index those? What should we do? So if this is content that is blocked by robots.txt, then we don't take that content into account for the website, because we can't see that content. So essentially, it's fine to keep it like that. Usually, especially with affiliate-based websites, the problem is more with the rest of the content on the website.
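As an aside, the robots.txt disallow described here can be checked with Python's standard-library parser. The rules and paths below are hypothetical, just to show the mechanics:

```python
# Sketch: a robots.txt Disallow on a directory of affiliate redirects,
# checked with Python's stdlib robots.txt parser. The "/go/" directory
# and domain are made-up examples.
from urllib import robotparser

rules = """\
User-agent: *
Disallow: /go/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/go/partner-123"))  # False
print(rp.can_fetch("*", "https://example.com/reviews/"))        # True
```

Blocked URLs like `/go/partner-123` can still appear in the index (discovered via links), but their content is never fetched, which is why it isn't counted toward site quality.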
And that it does need to have some kind of unique and compelling value, some reason for us to actually show it in the search results. So usually, it's not so much that a website is affiliate-based, but rather that the rest of the website is just very thin and not very useful. So that's usually what I would focus on there, not so much the fact that it's affiliate-based and you're trying to hide kind of this affiliate connection or whatever you're doing with those redirects that you're blocking by robots text. How to deal with pagination when content on page 2, 3, and 4 is largely a duplicate of page 1. More specifically, the content above the fold on page 2 is the same content as page 1. Is rel previous next enough? Should we do more? That's a perfectly fine use case for rel previous next. From our point of view, when we look at pages, we try to split them up into different parts. And we try to recognize which part of a page is the so-called boilerplate. The boilerplate is something that generally stays the same across a lot of pages. And by recognizing the boilerplate on a set of pages, we can also recognize which part of a page is actually relevant. And that way, we can focus on the content that's actually changing on these pages. So from that point of view, if you have paginated series, then that should just kind of work out. I think from a user's point of view, it sounds like you're probably a bit extreme with the amount of boilerplate you have on these pages if you say that everything above the fold is actually the same. Then if I'm thinking of a traditional e-commerce type website where you have some information about the shop on top, and then you have this paginated set of items in a category page, if you can't see those items at all on a category page, then that feels like it might be a bit confusing to users. But essentially, that's an indirect effect. 
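Setting the user-experience question aside for a moment, the rel prev/next annotations for a paginated series can be sketched as a small helper. The URL pattern here is an assumption for illustration:

```python
# Sketch: generate <link rel="prev"/"next"> tags for page N of a
# paginated category. "/category/spas" and the ?page= parameter are
# hypothetical; adapt to your own URL structure.

def pagination_links(base, page, last_page):
    """Return the head <link> tags for one page of a paginated series."""
    links = []
    if page > 1:
        links.append(f'<link rel="prev" href="{base}?page={page - 1}">')
    if page < last_page:
        links.append(f'<link rel="next" href="{base}?page={page + 1}">')
    return links

for tag in pagination_links("/category/spas", 2, 5):
    print(tag)
```

The first page gets only a `next` link and the last page only a `prev` link, so the whole series is chained together.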
That's something that you kind of have to deal with on your own and think about: how does this affect my conversion rate? Am I really getting the most value out of the users that I am getting? Are site maps a must-have? I mean, what about the non-tech-savvy people who run websites and don't really know how to put up a site map in the right way? Site maps, from our point of view, are really useful, especially for larger websites that change, that have a lot of new and modified content over time. For a lot of the smaller websites that essentially mostly have the same content, they're not so critical. So I still always recommend having a site map file because it's a good best practice. It's something that also brings the whole mental model of how a search engine works into something more practical, in the sense that you have to think about which URLs on your website are actually the ones that you want to have indexed. And I recommend using a tool to create the site map file — otherwise, it's a big hassle. If you're using a tool, then you also see how other tools crawl through your website to discover URLs on your website. So it's a great way to understand a little bit better how a search engine works and to provide real information about what you think is actually relevant and useful on your website. For a lot of content management systems, if you're making a website using some existing platform or some existing tools, then pretty much all of them support site map files by default, in the sense that you pretty much just have to tick a check box or at most install a plug-in. And then you automatically have site map files without even knowing what the details of the site map files are.
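The site map file itself is a very simple format. As a minimal sketch of what such a tool produces — with made-up URLs — a generator using only the standard library might look like this:

```python
# Minimal sitemap.xml generator, the kind of output a CMS plug-in
# produces automatically. The listed URLs are hypothetical examples.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a sitemaps.org-format urlset for the given page URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    root = ET.Element("urlset", xmlns=ns)
    for loc in urls:
        url_el = ET.SubElement(root, "url")
        ET.SubElement(url_el, "loc").text = loc
    return ET.tostring(root, encoding="unicode")

sitemap = build_sitemap([
    "https://example.com/",
    "https://example.com/products/",
])
print(sitemap)
```

The exercise of choosing which URLs go into the list is exactly the "which pages do I actually want indexed" thinking described above.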
So that's something where if you're non-technical and you're using something like WordPress or Blogger or any of the other CMS and hosting systems out there, then it's often a matter of just ticking that check box — or maybe it's even enabled by default — and it provides the site map file for you. I'd love to know about recommendations for foreign-language characters and special characters in URLs. Does Google have a preference? That's a great question. That comes up a lot, I've noticed. From our point of view, you can use either your local-language characters in URLs or you can use Latin letters in URLs; both of those work for us. We don't have a preference. It's mostly an identifier for a website, so use what makes sense. If you're using these in the domain name, then obviously they're in punycode in the end. So you kind of have to watch out for how that's actually set up. That's usually with the xn-- prefix and then some weird characters after it. You can kind of see what that looks like. If these are characters in the path or in the query part of the URL, then they just have to be in UTF-8, and that'll just work. From our point of view, from a search point of view, we do use some content from URLs for ranking, but it's very minimal. So I would mostly focus on what you think makes sense for your users and use those. One thing that I have noticed sometimes is that websites use English words in URLs because they think that's what search engines understand best, but the content is actually written for non-English speakers. And sometimes that can be confusing for people. If they see a URL in the search results that's clearly in English, but the content is actually not in English, then they kind of don't know: is this a website that's actually speaking my language, or is this a website that's essentially in English?
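Both encodings mentioned above — punycode with the `xn--` prefix for domain names and percent-encoded UTF-8 for paths — can be seen directly in Python. The domain and path below are made-up examples:

```python
# What non-Latin characters become "on the wire": punycode ("xn--")
# for domain labels, percent-encoded UTF-8 for the URL path.
from urllib.parse import quote

# A German domain label with an umlaut, hypothetical TLD:
domain = "münchen.example".encode("idna").decode("ascii")
print(domain)   # xn--mnchen-3ya.example

# A Cyrillic path segment ("catalog"), percent-encoded as UTF-8:
path = "/" + quote("каталог")
print(path)     # /%D0%BA%D0%B0%D1%82%D0%B0%D0%BB%D0%BE%D0%B3
```

Browsers and search results typically display these back in their readable form, so the encoded version is mostly what you see in logs and configuration.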
So if possible, I'd really recommend using local words — the right language, the right characters — in your URLs. Is it OK to use a different nav bar or component with duplicate links for mobile users and hide it for desktop users with display:none? Sure, that's completely normal. A lot of responsive sites do that by default, essentially. They have different navigational elements depending on the size of the viewport. So whether it's on mobile or not, or maybe in wide mode or in tall mode, a page might use different navigational elements, and that's completely normal. That's usually a pretty good practice. Is there a way to check if a site is blacklisted or in a temporary sandbox status? So blacklisted would mean that there's a manual action on the site, and manual actions are visible in Search Console. So that's where I would double check. We have an extra section for manual actions in Search Console that gives you information on what is currently happening with that website. With regards to a sandbox, we don't really have the traditional sandbox that a lot of SEOs used to talk about in years past. We have a number of algorithms that might look similar, but these are essentially just algorithms trying to understand how a website fits in with the rest of the websites trying to rank for those queries. And sometimes what happens is our algorithms will start at one place, see how that works out, and try to get confirmation through all of the other signals that this is actually working out. And sometimes that results in things going up afterwards, which is kind of like the sandbox that you're talking about here. Sometimes that results in things dropping down a little bit, which is, I think, the honeymoon period that people sometimes talk about. So it's always kind of tricky in the beginning when we have a new website and we don't quite know where we should put it.
You kind of see some changes in search happening there, because we have to make some assumptions and figure out: does it make sense to put it here? Are we seeing enough support after a while that this is actually the right place, or should we have ranked it higher or lower? These things are kind of tricky with new content. How do you avoid duplicate content on download websites that have similar content to other download websites? So I think, first of all, if this is a download website and you're just providing content for download that's available everywhere else, trying to obfuscate that this is the same as everywhere else is probably not really the best use of your time. Probably the main problem here is that your content is the same as everyone else's. So even if you change the words a little bit, it's still the same content. And that's something where it would probably make sense to find something that allows you to be significantly different from all of the rest, so that when someone is searching for — I don't know, depending on what you have there — let's say CSS templates on your website for download, you really provide a unique and compelling experience that's significantly different from everything else, and you provide really top-notch, absolutely number-one content on your website and a great user experience, great functionality. So instead of just doing the same as everyone else and trying to tweak the words here and there, I'd really think about what you can do to be significantly different. And sometimes it's hard to be significantly different, and sometimes it's impossible. If your main model is essentially providing the same downloads as everyone else, then that's a tricky situation, and I don't really see a simple tweak that will change that.
So in a case like that, if you want to focus on search, maybe it makes sense to think about a different kind of website instead. It's been a year now and we have a noindex and a redirected URL, but Google still shows the old URL in the search results with "no information is available." So I don't really know how the setup is here, because if you have a redirect and a noindex, you can't have a noindex on the page that is redirecting. So it feels like there's some part missing here. What I would recommend is starting a thread in the Webmaster Help Forum with the details that you have, and then other people can double check that and take a look to see if there's something that you're missing. Sometimes — or oftentimes — what I see happening is that one of these URLs is blocked by robots.txt, and then we can't see the redirect or we can't see the noindex, for example. Sometimes what might be happening is similar to the case a while back here, where we just have multiple URLs and we have to pick one of these as the canonical, and we pick one as the canonical that you don't really want as the canonical. And in cases like that, it's useful to get some tips on what you can do to really tell Google that this is what I want as the canonical. Will using link shorteners instead of canonical URLs to share my website's pages impact my rankings negatively? No. If we can crawl through a URL shortener — if it's a clean redirect from the short URL to the actual URL — the actual URL should just continue to work. There's some noise here — let me just reduce the amount of background noise a little bit. AMP removal, with an m-dot page instead of AMP: in such a case, should AMP be redirected to the desktop or the m-dot page? So for the most part, from a theoretical point of view, you don't necessarily need to redirect the AMP URL, because the AMP URL is only associated with one of the other pages. So it's not that there are any signals attached to the AMP URL. The AMP URL already has a canonical set to the desktop page.
If you do want to redirect the AMP URL because you see users going to the AMP URL, then I would redirect to the mobile version, because users that go to the AMP page are usually on mobile. But from an SEO point of view, the AMP page is already canonicalized to one of the other pages, so it's not that you would lose out on any SEO value if you didn't redirect. Are links in hidden tabs or collapsed content only followed without PageRank being passed, or is PageRank also passed? Kind of similar to Glenn's question in the beginning: we do pick up the content in hidden parts of a page, and we do try to treat it appropriately. Sometimes that means we treat it like completely normal content. Sometimes that means we treat it kind of like other hidden content that people have on a page, and we have to figure out how to handle that. So in general, if something is important to you, I would make sure that it's visibly important to users as well. Google Small Business made an announcement two and a half years ago saying that if your account remains inactive because you're not logging in, it could run the risk of being de-verified or removed. So I guess the question is what will happen with the search results. So I don't know what the Google My Business team does in cases like this. I don't know if that's still relevant. If you're wondering about that, I'd recommend posting in the Google My Business Help Forum. The folks there are generally pretty helpful and can probably point you at some document that says this is what happens in one case or the other. With regards to search, we treat websites independently. It's not that we try to recognize which website belongs to which Google My Business listing, double check who owns that Google My Business listing, and then show the website in search. Rather, in the organic search results, we rank those websites the way that we rank any other website.
So just because a Google My Business listing exists or doesn't exist wouldn't change the organic rankings of that page. This is a bit different when it comes to the local one-box, I believe it's called, where you have a map and you have the local listings on the bottom or on the side. Those local listings are based on the Google My Business listings. So if those Google My Business listings don't exist, then they wouldn't be shown. Is structured data like JSON-LD a must for a blog or for a micro-site? Essentially, nothing is a must for a blog or micro-site. You could probably use robots.txt and block crawling of the whole website completely, and it could still show up in the search results. It's obviously not optimal to do that. And similarly, structured data provides a little additional value and a little additional information about those pages, which helps us to better understand the content in the context of what you have on those pages. So in that regard, it's definitely not a must, but I think it's a good practice, and it's something that helps us to better understand things on your site, and sometimes that leads to us being able to rank you for more relevant queries. It's not the case that it automatically makes your website rank higher by default. We're currently working on schema markup for our product detail pages' breadcrumbs. After reviewing other sites, there seems to be a 50-50 split between sites that include the actual detail URL in the breadcrumb and sites that stop with the parent URL. Does Google prefer one way or the other? I don't actually know. So I suspect if you're seeing these both ways in the search results, then both of these options are viable and something that you could use.
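The breadcrumb markup in question is a schema.org `BreadcrumbList` in JSON-LD. As a hedged sketch — the page names and URLs are made up — generating it might look like this:

```python
# Sketch: schema.org BreadcrumbList structured data in JSON-LD.
# The crumb names and URLs below are hypothetical examples.
import json

def breadcrumb_jsonld(crumbs):
    """Build a JSON-LD BreadcrumbList from (name, url) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {
                "@type": "ListItem",
                "position": i,
                "name": name,
                "item": url,
            }
            for i, (name, url) in enumerate(crumbs, start=1)
        ],
    }, indent=2)

print(breadcrumb_jsonld([
    ("Home", "https://example.com/"),
    ("Spas", "https://example.com/spas/"),
    ("Deluxe Spa 3000", "https://example.com/spas/deluxe-3000/"),
]))
```

Whether the last entry (the detail page itself) is included is exactly the 50-50 split the question describes; the structure is the same either way.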
Personally, I'd focus more on having a reasonable number of breadcrumb items, in the sense that if you have too many items as breadcrumbs, then obviously we trim some of those out and show the "..." in the breadcrumb area, and that might make it look a little bit confusing. So that's more what I would focus on, rather than whether or not to include the last URL as well. It sounds like we don't have a policy in that regard, but I'm not 100% sure. Do duplicate articles with the same links get ignored by Google, or do they have a negative effect on rankings? Do you suggest disavowing duplicate article links, as they're copied without our authorization? I'm not really sure in which direction you mean. Is this like other people are copying your content and you're wondering if that's having a negative effect on your site, or is your site providing the same content as other people already have and you're wondering if your site will kind of disappear? Assuming that other people are copying your content, for the most part that's something we deal with fairly well. Sometimes there are edge cases that make it a little bit harder for us to deal with. One thing you might want to consider doing, if you see this as something that's problematic in the search results for your website — in that people are taking your copyrighted content and republishing it in a way that you don't agree with — is that you may be able to take legal action on that by using the DMCA process. That's something where you'd want to get legal advice from a lawyer, or at least someone who knows this process fairly well. We can't give you legal advice on this, or tell you whether you should use the DMCA to deal with something like that or not, but it's an option that is available. If that makes sense in your case, that might be something to do. We're a media company currently in Google News and — let's see, really long question.
I think I chatted with you guys on Twitter as well, and you have a bunch of forum threads. I did look at this with the team, specifically around the top stories carousel there. And from what we saw, this is essentially just normal organic ranking, in the sense that there's nothing manual stuck to your website or otherwise causing a problem or holding it back. It's essentially just normal organic ranking, and the top stories carousel is something that is put together organically, based on normal ranking factors. So it's not the case that there's any list on our side where we say, oh, this website is bad, we should not show it in the top stories carousel. This is essentially just organic ranking. So I did double check with the team on this again after you posted on Twitter recently. And I haven't heard back from them with regards to a new update, but from previous times I've talked to them about this, they say this is essentially working the way it normally would work. We're a large Dutch travel portal. We migrated our listing pages mid-December. Since the migration, we see some of our bigger listing pages not ranking anymore for the most logical high-volume keywords, but now ranking for lower-volume, long-tail keywords. URLs and content haven't changed. Google can crawl all of the content. What could be happening here? So in general, it kind of matters what you did with your migration there. That's something where, even if you keep the same URL and have the same main content on a page, if you significantly change the layout of your pages, then that can have an effect on the search results as well. Sometimes it has a positive effect, if we can crawl your site a little bit better. Sometimes it can also have a negative effect, in that maybe we can crawl your site less well.
So that's something where I would kind of think about all of the changes that you did with that migration and double-check what things looked like before, especially with regard to things like internal linking and the anchor text that you use in those internal links. All of this can make it a little bit easier or a little bit harder for us to crawl a website, so those are things that I would go through and double-check. Even if you have the same URLs and the same primary content, that can play a role there. All right, let's see. I think we are reaching the end here. What SEO services and tools would you recommend using? Search Console, of course. Past that, I don't know. I don't have a lot of experience with the traditional SEO tools. I know there are some that people speak really highly of. I've seen some that are really, really neat, but I don't want to make any recommendations here from a Google point of view. So if you're interested in good SEO tools or services, I would ask the general community and not someone from Google, because we can't really make these recommendations. All right. Wow, I think we made it to the end. What else is on your mind that I could help with? I had a question. So in a previous Hangout, I posed a question about how sometimes we have to use banned words when we report about celebrities. So for instance, I'm wondering whether we should redirect; when we redirect on a story, for instance, are we passing on this negative aspect to that tag, or should we just remove that story from the index? I'm not too wedded to it; I'm more concerned with cleaning up one particular instance of an actor who was arrested for possessing child pornography. Frankly, my view is I'd rather not have any reference to that on my site anymore, as an ethical decision. So redirects there shouldn't play a big role. I don't think that would make a big difference either way. Usually what happens is we try to look at this on several levels.
On the one hand, we try to look at the website overall and think about whether this is a website we should classify as an adult content website, which is probably not the case in your situation: there's just so much normal content, and only individual places where we might see adult content type keywords. That shouldn't be a problem there. Sometimes we can recognize sections of a website as being adult content oriented. This might apply if you, I don't know, have a lingerie store, and maybe you have a section for the more adult type content and a section for everything else. And if we can clearly recognize those sections, that makes it a little bit easier for us as well. When it comes to something like your website, where I assume these are just individual articles that are not collected in one subdirectory for adult content, then we try to do that on a per-page basis, based on the content that we have on the page. And sometimes individual keywords can skew things for us, in that we see those keywords and we're like, oh, we have to be really careful here that we don't accidentally surface this in the wrong place. Oftentimes we can still recognize that this is something that's mentioned in the context of a bigger picture. A common use case there is, for example, medical content, where obviously you use a lot of words that you also find on adult content web pages, but you use them in a way that isn't actually meant to be adult content. And I suspect with your website it's probably trickier, in that we can't clearly recognize what the context of this adult content is. So if we find those words on your pages, then it's possible that we will just say, well, those individual pages are potentially more adult oriented than others.
One thing you could do to kind of gauge whether that's a problem is to search with SafeSearch on and off and see if there are any differences there. And sometimes it's really clear: you turn SafeSearch off and suddenly all of your website is visible again, and then you can guess, well, probably Google is classifying my site as being more adult oriented than not. And then you can think about what, if anything, you need to do to help clean that up. If we're classifying your website as a whole incorrectly, we have a form in the Help Center where you can tell us about that. I suspect in your case it's more like we see those words on a page, those words disappear at some point, then fine, we move on. Yeah. Thank you. John? Go for it. Not only Search Console, you have a lot of useful tools: Google Analytics, Google My Business, the mobile tests, Test My Site, the structured data test, PageSpeed Insights, the mobile-friendly test, the AMP test. That's a lot of tools. Thank you for that, we appreciate it. Thanks, good to hear. Yeah, there are a lot of things, that's true. They don't all fall under Search Console directly. Lots of stuff. Yeah, yeah. And we use them, not just for that. Fantastic. Good to hear. Hi, John. Go for it. How does Google consider site navigation on the right side or the left side? That's totally up to you. That falls into the category of what we call the boilerplate on a page, and it doesn't matter for us if that's on the left side or the right side, or on the top or the bottom. We try to recognize what the primary, kind of changing things are on a website, and the rest is essentially things that don't change, and we can deal with that whether it's left or right or top or bottom; it doesn't matter. Okay, do you consider it like a text link, or do you consider it as navigation for that one? That's totally up to you. As far as I know, we don't look at it from...
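The SafeSearch on/off comparison suggested above can be scripted against Google's Custom Search JSON API, which exposes a `safe` parameter. A rough sketch, where the API key and engine ID are placeholders and actually diffing the two result sets would need a network call:

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def build_search_url(query, api_key, cx, safe="off"):
    """Build a Custom Search JSON API request URL.

    `safe` is "active" (filtering on) or "off". Comparing results for a
    site:your-domain.com query under both settings hints at whether
    pages are being classified as adult content.
    """
    params = {"key": api_key, "cx": cx, "q": query, "safe": safe}
    return f"{API_ENDPOINT}?{urlencode(params)}"

# Placeholder credentials; a real check would fetch both URLs
# (e.g. with urllib.request) and diff the returned "items" lists.
for mode in ("active", "off"):
    print(build_search_url("site:example.com", "MY_KEY", "MY_CX", safe=mode))
```

Pages that appear with `safe=off` but vanish with `safe=active` are the ones worth reviewing, or reporting through the Help Center form John mentions if the classification looks wrong.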
Okay, and if we remove those things, will it impact my ranking or something from the SERP perspective? Maybe. So we do take into account that kind of text on the side, left or right, and the internal links that you have there do help us to better understand your website. We can crawl through your website a little bit better, and with the anchor text that you have on those links, we recognize a bit better what the connection is between these pages, what is linked there. So that should just continue to work. Hi, John. Hi. Just a quick question. Do you plan to develop a mobile application for Google Search Console in the future? A mobile app for Search Console? Yes. I guess it depends how far in the future you want to think. I don't know of any plans. I know with the new version of Search Console, the plan is to make it mobile friendly so that you can use it in a mobile browser. I don't know of any plans to make an app for that. So far, I think the team is happy to focus on just making new and improved features for Search Console, and when they make those new features, to make them available in a mobile-friendly way. But maybe apps will come up again and be something that everyone needs to have, and maybe we'll reconsider that. It might also be that the PWA model really grows in popularity and in functionality, and maybe it makes sense to go down that road. I don't know, hard to say. Would you use that? It's very convenient to use Google Analytics and AdWords on mobile devices. I think it would be very good if we had an app there. But it's up to you. Yeah. So one of the things that the team was always kind of worried about is that a lot of the things in Search Console are issues that you need to actually change on your server. So if you have a lot of problems or something, then you need to fix that on your server.
And they always felt that if you have to change it on your server, then there's no way you can do it on a mobile phone. How do you feel about that? Does that match what you use Search Console for, or are there things in Search Console that you think would be really useful from a mobile device? I think it would be more useful from an analytics perspective, just to observe the websites when we are not in front of our computers. OK, so the search analytics, the keywords people are using to search, those kinds of things. Yes. Yes. That sounds cool. OK. We're very often on a plane, for example, or commuting, and I think it would be useful just to check something and compare. The general idea is from an analytics perspective, as I said. OK, cool. Yeah, to make sure there's nothing happening, or to kind of get some creative ideas on what you could be changing to improve the rankings for the different keywords that you're looking at. Cool. Yeah, I'll pass that on to the team. OK. Thank you. I also think it would be very useful, when we have some error, to have a link ready for that type of error, because you have these help articles. OK, so more linking between the individual parts of Search Console and the help information that we have. Yeah, that's what you're asking. Yeah. OK, cool. Sounds like we need to work on our website, too. OK, cool. All right. So maybe we can take a break here. Thank you all for joining. You're coming at a crazy time, Michael; I hope this was a bit useful, and I'm looking forward to hearing more details from you. Thank you so much. And also, thanks for all of the other questions that were here. Thanks for joining in, and hopefully we'll see you again one of the next times. Have a good day, everyone. Thanks, John. Thanks, John.