All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland. Part of what we do are these Webmaster Office Hours Hangouts, where webmasters, SEOs, anyone interested in web search related things can jump in and ask questions. As always, if any of you want to get started, feel free to jump on in now.

Hello, John. Hi, Mia. I have a couple of questions. One is regarding this search result. I was wondering if you could tell me whether that is a knowledge graph card or a featured snippet, because it doesn't really fit either. It's not a featured snippet because it doesn't show all the results. And it doesn't seem to be a knowledge graph card because that doesn't look like an entity. So it's a bit weird, I guess. I haven't seen it on any other search, so that's the kind of thing I'm asking about.

I see a normal knowledge graph panel on the side. So we're seeing something different. Yeah, but that query isn't about a specific entity. It's just, it's not an entity, so to speak. I think what Mia is saying is that it looks more like a featured snippet than a knowledge panel. And I think what you're saying, John, is that they're kind of the same. But I suspect you all are seeing something different than I'm seeing. I just see the thing on the side. Right. But that doesn't look like an entity. It's just a normal query. It's like people who are using that technology and are looking for how to monitor it. And I have no idea what that technology is or what that query means. So it's hard for me to kind of estimate what should be there or what shouldn't be there. It looks like a knowledge graph thing to me. Right, but it's a generic query that shouldn't have a knowledge graph. Yeah, pretty much. OK. It's like searching for, I don't know, SEO tools, and then you see a knowledge graph card for that. OK. And it seems that you're picking it up incorrectly.
It's hard for me to judge. OK, I'll send a couple of similar queries your way. I've seen this especially regarding these kinds of very technical technologies, usually. I think it's the same if you search for NGINX monitoring; I think you'll see the same thing. And that's definitely not an entity. I mean, NGINX is an entity, but NGINX monitoring shouldn't show anything there. Oh, OK. Videos and ads and blog posts. Yeah, but that is basically just another technology, just like... OK. OK, I'm happy to take a look. It's really hard for me to judge these kinds of queries when I don't know what's involved. OK, I was just curious: that specific website that shows up for that knowledge graph card, are they kind of abusing the system a bit, or is that normal? I suspect our systems are just picking that up and think that it's the right one to show. OK. I don't think there is anything malicious happening there. It's probably just that our systems are confused. OK.

And one other thing was regarding the August 1 core update. I've noticed, at least for Romanian results, there were a few websites where it's kind of weird that they got impacted. It's kind of subjective, of course, but it seems that the sites that rose in the results have kind of lower quality content, or some of them don't even have a mobile version. And they just jumped up in the results and outranked websites that had a lot of good content. And I'm wondering if you've seen this, because I've seen this especially in European markets, non-English-speaking countries. I've seen a lot of people with the same issues: normal sites getting downranked, and some weird, not weird, but usually lower quality sites getting better positions. I wonder if it's something that you're still working on, or if it's kind of done and you're happy with the current state. I mean, we're always working on the search results. So, sure.
If you have examples to send our way, those are always useful. I haven't seen lots of negative feedback in that sense from lots of European countries. I have seen individual pieces of feedback that some Italian people on Twitter posted. So maybe it's limited to individual sites or individual types of sites. But I'm always happy to pass these on to the teams to take a look and see: are we doing it right here, or is this essentially a case where we have two choices that are both kind of equal, and one of them just barely makes it and the other one doesn't?

So were there any individual cases, as you mentioned, where you think that shouldn't have happened? I didn't notice anything specific. So that's not something where I had anything really direct and specific that I could pass on to the ranking team and say, hey, you're doing it obviously wrong here. I suspect maybe the folks that are working directly on individual other markets have something that they passed on to their teams directly, but overall, I didn't see anything that really stood out where I said, well, we're obviously doing it wrong here. I mean, there are always situations where we get things slightly wrong or something is slightly confusing, kind of like the knowledge graph thing that you showed. But for the most part, I didn't have anything totally obvious that I could point out to the team. OK.

Sometimes this is also a matter of getting the right feedback. So if people complain on Twitter and say, oh, all of the Romanian search results are bad, I can't do anything with that. I can't go to a team and say, search in Romanian, you will see bad results. I really need kind of generic queries where we can say, well, these are things that users are really seeing frequently enough that it makes sense to take this specific report seriously, and where there are obviously really bad results on top and really good results are completely missing.
So, really, the kind of things that you can pass on where they can look at it and say, oh, yeah, we're obviously doing something wrong here; we need to fix this as soon as possible. OK. Yeah, I didn't want to get into more details, so I won't. Cool. All right. I can send that to you and you can decide what needs to be done then. Cool.

John, can I ask a quick question? Sure. OK, so three very small ones, actually. But you can always cut me off if I go on too long. So I'm studying quite a lot around mobile UX, obviously with the mobile-first wave coming through. There's obviously this challenge with very small real estate, desktop websites, a lot of content. I don't know whether you're familiar with things like gestalt principles, where things are smaller but they are more usable, et cetera. And they use things like maybe icons to replace text, things that are intuitive and people are used to seeing, et cetera. So if, for instance, you've got a separate mobile site and a separate desktop site, and obviously now we have to have this content parity situation because of mobile-first: if you were to reduce down a lot of the kind of superfluous waffle that is often on desktop websites, and boil that down into things like structured data, semi-structured data, and unordered lists, et cetera, things that are much more space saving, and use things like icons and kind of right-hand sidebars instead of dropping things below the breakpoints, would that kind of form almost a type of equivalency? In other words, you don't necessarily have to copy and paste all the waffle from a desktop site and plug it into a mobile site for parity to be considered equivalent. What do you think?

Sure, yeah. I think you have to be careful that you still have the relevant content on there in a way that is machine readable.
So if you replace, for instance, the menu with just icons and you don't have anything like alt text or any descriptive elements there, then we will lose that anchor text. We'll still be able to crawl, but we won't have that information. But the point is, if you replace it with data that is machine readable, it's better for humans from a cognitive load perspective, but there's also still the ability to interpret it. So say you just cut out loads of waffle there. I know Barry's laughing at the word waffle. I don't mean the one that you have for your breakfast. I mean just superfluous nonsense. But do you see what I mean? You have this challenge of trying to take these traditional desktop sites into... I know accordions, I know that, yeah. But sometimes even those are not necessarily amazingly useful. So I just wanted to check whether you can somehow balance out equivalent parity. I'm thinking parity doesn't have to be copy and paste, or even plonking it all into an accordion or a concertina. Certain things can have the same meaning to query clusters but be presented in totally different ways, can't they?

Yeah, yeah, yeah. Definitely. I mean, you can always look into doing that, but you still need to be careful that you really have the relevant content on the mobile version and don't just imply it. That's something that we sometimes see when sites simplify the mobile version too much. Then things like screen readers suddenly stop working on mobile, and that's something that's relevant on mobile too. Then things like anchor text suddenly get lost. All of these things, they still play a role. So especially when you think about the mobile-first indexing step, that's something where we will only use the mobile version. So if there's additional context that's only available on the desktop version and we only have the mobile version, then that additional context will be gone. And like you said, a lot of the text can be reduced.
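As a concrete sketch of the icon point (the URLs and icon files here are illustrative assumptions, not from the discussion): an icon-only mobile menu can stay machine readable as long as each link carries descriptive text somewhere a crawler can see, for example via `alt` or `aria-label`.

```html
<!-- Icon-only mobile navigation that keeps its anchor text machine readable.
     Without the alt / aria-label attributes, a crawler sees links with no text. -->
<nav>
  <a href="/shoes/"><img src="/icons/shoe.svg" alt="Shoes"></a>
  <a href="/boots/"><img src="/icons/boot.svg" alt="Boots"></a>
  <a href="/sale/" aria-label="Sale items"><svg aria-hidden="true" width="16" height="16"></svg></a>
</nav>
```

This also keeps the menu usable for screen readers, which, as noted above, is something that matters on mobile too.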
That's something where we still see a lot of e-commerce sites, on their category pages, essentially having a Wikipedia article at the bottom, kind of like, oh, you can use shoes for walking; you can put shoes on your left foot, on your right foot. And it's all of the stuff that is essentially old-school keyword stuffing, which we're essentially ignoring already. So cutting all of that out doesn't necessarily mean that those pages won't rank for those terms, because we can still recognize, oh, there are shoes on this page; we can rank this page for shoe-type queries. But at the same time, if you reduce it too much, then suddenly we just have a list of images of shoes, and we have to kind of guess: what is this site really about? What is this page supposed to rank for? Because we have almost no text at all.

So what about tooltips? I'll move on in a sec because I don't want to hog the time, et cetera. But, for instance, we know that on mobile, from a user perspective, hovers don't work, with the title tooltip thing. But I'm thinking that actually, in terms of ways to send relevance and help with disambiguation, obviously that data is massively important. And a tooltip is in the rendered code, isn't it? So that might be another way to add a little bit. I'm thinking of things that are not going to upset humans, who are, like, overwhelmed with all this nonsense, but that search engines can understand. Yeah, that makes sense. OK, OK.

Can I ask one more quick question? Unless anybody wants to jump in. Good point. Anybody? OK, so say you have a category page. And the category page, really, you know, you mentioned e-commerce with shoes, yeah. So say the category page has subcategories of red shoes, blue shoes, blah, blah, blah. Though color is probably a really bad example, because we know that with shoes, color is a funny thing to actually try and categorize by. And really, they should maybe not all just be in subcategories as such.
But say you had high heels, flats, et cetera. Even on the category page, all that's really on there are products from the subcategories. The category page could probably be impacted quite negatively, couldn't it? Because the shoes that are in those categories, the subcategory pages, often rank, and I'm seeing a lot of category pages that don't, because there's nothing there that adds any value beyond what's already in the subcategories.

I guess you'd have to look at the type of queries where you'd expect that page to rank. So if you have a shoe store and people are searching for buy shoes online, then maybe a category page would rank there. But on the other hand, it's probably rare that people are searching for something so generic that kind of a very broad category page would rank. So that's something that you have to balance there. What intent are you covering with these pages? And is it worthwhile to cover that intent? It might be that you say, well, I don't care about these too-generic queries, because these people will just go to, I don't know, a million other shoe stores anyway. They're not going to be tied in with the content that I have available. On the other hand, if someone is explicitly looking for, I don't know, buy running shoes online, or a specific brand or something, then that's something that you can work with. Or you can say, well, I have really unique content that I can provide here; I have a good offer that people will want to take advantage of.

What about if all you sell is shoes, just as an example? And you talk about this generic thing; maybe people don't necessarily look at just generic things. Sometimes, if all you sell is shoes, would the homepage also rank, perhaps, if you were just looking for that? Because, again, that's something else I've seen a bit of, where almost the homepage pops in and out. I know that that's navigational, et cetera. It's just an example of shoes, by the way.
So that's what I'm saying: do you need the homepage and also a shoe category if all you sell is shoes? I know it's all, it depends. You'll just say it depends. Yeah, I think that really depends on what you're trying to sell, what you're offering, what the unique value of those pages is. And I think navigational pages are fantastic, because if people are looking for your brand and they land on your homepage, what more do you want? They're not going to go somewhere else if they're looking for your store. But if it's not a navigational query, then that's something you have to consider. Is it worthwhile optimizing for something as generic as shoes, or is that something where you think, well, people that are just looking for shoes online, what can I even offer them on our site that would satisfy all the possible intents behind something so generic? Okay. Okay, yeah, thanks. Yeah, that's great. Thank you. I hope someone out there has a shoe store that can take advantage of all of these things. Thanks.

All right, let's look at some of the submitted questions. The first one is kind of specific, I guess. I'm working on a site where the designers make a lot of nested divs. One of the divs has font-size zero, and then later a UL tag and list item anchors. And on the A tag, we have font-size 13. I can see all of the content, but it seems to be invisible for Google. Is this correct behavior of the web rendering service? I don't know. It probably depends on what exactly you're looking at there. That sounds like maybe we're picking up the CSS incorrectly. Maybe there's something else interacting there that's making it hard for us to pick that up. But in general, we use a Chrome version to render these pages. So if Chrome can show that, then that should work for Google indexing. What you can also do in cases like this is explicitly search for that content in search. And if you can find it in search, then obviously it's working.
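For reference, the structure described in that question probably looks something like the following sketch (the class names, URLs, and sizes are illustrative assumptions). Setting `font-size: 0` on a wrapper is a common trick for collapsing whitespace between inline-block items, with the real size restored on the links themselves; since Chrome renders the link text, a Chrome-based renderer would normally be expected to pick it up too.

```html
<!-- Wrapper collapses whitespace with font-size: 0; links restore their own size. -->
<div class="nav-wrap" style="font-size: 0;">
  <ul>
    <li><a href="/red-shoes/" style="font-size: 13px;">Red shoes</a></li>
    <li><a href="/blue-shoes/" style="font-size: 13px;">Blue shoes</a></li>
  </ul>
</div>
```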
How can we index our backlinks fast, as the Google URL submission tool is not working? So I assume you mean URLs on your site that you're trying to get indexed, because backlinks don't really get indexed. In general, what you can do there is use the Fetch as Google tool in Search Console. So you have to verify your site first, use the Fetch as Google tool there, and from there you can submit URLs for indexing. So that should continue to work. That said, in general, most sites should not need to do this manually. It's something that I assume most sites out there have never done manually. In general, we crawl and index the web in a natural, organic way, and we pick up new URLs as quickly as we can. We index them as quickly as we can. You shouldn't need to rely on manually submitting them to Google.

Is the mobile usability that's shown in Search Console a part of the mobile ranking factor? I'm not exactly sure what the mobile usability report shows in Search Console; I'd need to double-check the details there. But in general, the mobile ranking factor essentially means that we think a page is mobile-friendly, so it fulfills the criteria that we have for mobile friendliness, and because of that, we can rank it slightly higher in the mobile search results. And mobile usability, I assume, covers pretty much all of the aspects that apply for mobile friendliness. So I'd certainly take a look at that. A lot of times there are issues flagged there that are easy to fix, so that's always worth cleaning up and making sure that things are aligned.

What about the images in the new Search Console? They show as soft 404. What can we do in this case? In general, the soft 404 applies to web pages, meaning HTML pages that we would index in normal web search. So if you're testing images there, for example with the Inspect URL tool, then the status there would not be reflective of what we would show in Google Images. It would only apply to Google Web Search.
So it would be kind of normal for us to say, well, this image is not a web document that we can show in Google Web Search; therefore, maybe we'll show it as a soft 404 or some other status in the Inspect URL tool.

What do you think about injecting a JavaScript canonical tag with Google Tag Manager into HTML pages? Can Googlebot see it or not? There are some blog posts about this out there. So in general, I think using something like Google Tag Manager is a pretty cool way to modify pages. But you have to keep in mind that this is something that runs fairly off to the side of the rest of your website, of the rest of web search. Essentially, that means that only search engines that are able to render your content would even see this; only search engines that are able to process the Google Tag Manager scripts and what they do in the background would be able to see this. So if you're setting something like a rel canonical tag, which is relevant to lots of different search engines, and you only use Tag Manager to set it, then only Google would be able to see it. That might not be so optimal. So that's one thing.

The other thing is that, because it's all in JavaScript, we have to process the JavaScript first in order to see what is added with JavaScript. And usually that involves a time period after the initial crawling of the HTML page, after the indexing of the initial HTML page, where we have to queue things up, get things ready for JavaScript processing, pull in all of the JavaScript files, all of the CSS, process the page like a browser would, and then take the final result and use that for indexing again. So you have this timeframe from us taking the HTML page, processing and indexing that, to maybe a couple of days later, maybe a week or so later, us using the JavaScript version that we get from rendering, and processing that one. So there's always this kind of delay between those two setups.
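For illustration, a Tag Manager Custom HTML tag that injects or overrides a canonical client-side might run something like this sketch. The function name and URL are illustrative assumptions, not from the discussion; the point is that only search engines that render JavaScript would ever see the result.

```javascript
// Hypothetical sketch: inject or override <link rel="canonical"> at render time.
// Crawlers that only read the raw HTML never see this change.
function injectCanonical(doc, href) {
  var link = doc.querySelector('link[rel="canonical"]');
  if (!link) {
    // No canonical in the raw HTML: create one in <head>.
    link = doc.createElement('link');
    link.setAttribute('rel', 'canonical');
    doc.head.appendChild(link);
  }
  // Either way, point it at the preferred URL.
  link.setAttribute('href', href);
  return link;
}

// In a real Custom HTML tag this would be called as:
// injectCanonical(document, 'https://example.com/preferred-page/');
```

Note that when the raw HTML already contains a different canonical, this produces exactly the conflicting two-stage signal John describes next: the HTML value is used first, and the JavaScript value only after rendering.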
So if there's anything critical that you're adding with Tag Manager, you have to keep in mind that new things will take a certain period of time to be processed. And the other aspect that comes into play because of this delay is that if you're doing anything different in Tag Manager than you have in the HTML pages, then you have this conflicting state between the two versions. So if you have a canonical in the HTML page and Tag Manager changes the canonical to something else, then for an initial period of time we'll follow your initial canonical, or we'll take that into account; then later, we'll switch to the JavaScript version of the canonical. When we reprocess that page, we have that situation again. And depending on the difference that you have there, that can make it very tricky. Now, the canonical is a hint for us, so it's probably not 100% critical whether we get exactly the right one or a slightly different one from time to time. But if you're changing something like a noindex robots meta tag on a page using Tag Manager, then that completely changes the meaning of the page. If you're adding a noindex, then we have this time where we see the page as indexable, and then we see it as non-indexable. If you're removing a noindex tag, then we initially see it as non-indexable, and it might be that we never process the JavaScript, because we think, well, what is there to work on? The webmaster already told us this page shouldn't be indexed. So those are the things to keep in mind there.

And I assume this state will continue for quite some time. It'll be tricky for us to get the crawling and rendering completely together. That's mostly for resource reasons. It's something where we can't render every page immediately after we crawl it, because that would require us accessing all of the JavaScript and doing all of the rendering stuff immediately after crawling.
So you'll always have this kind of time offset between the, or I don't know, maybe not always, but at least for the time being, you'll have this time offset between the HTML version and then the JavaScript-rendered version. So that's something that will always be kind of tricky there. And of course, other search engines: as I mentioned in the beginning, if you're adding something that's relevant to other search engines, then that would likely get lost until they also do rendering. So can you do it? Sure. Is it the best way to do a rel canonical? Probably not.

Presumably, though, John, there's a way to sit in between that delay? Is it called dynamic rendering now, or something like that? Is that what it's called, where you have that two-stage rendering? Or is it just two waves? I can't remember. You talked about it at I/O, didn't you? Gave a talk. So dynamic rendering would be if you serve Googlebot the rendered version immediately. Oh, yeah, I'm getting confused. That's something that you could do to prevent this kind of two-stage indexing, by essentially rendering it for Googlebot, and rendering it for other search engines as well. So that could potentially be a solution. Sure. OK.

But if you didn't do that, and you had a lot of cruft, et cetera, and legacy conflicting canonical signals, by the time that second stage, the JavaScript injected by a tag manager, came around, lots of other signals from other URLs that had been gathered in the normal way could have obviously just made that even more messy. All right. I think with a canonical, it's usually less of a problem, because we already have multiple signals.
And if the rel canonical is the only thing that's pointing at one specific URL, then I assume in many cases that wouldn't be taken into account anyway. If we see the internal linking is all going to this one URL, and then that URL has a canonical pointing somewhere else, then the internal linking is a pretty strong signal for us, and we'd say, well, probably we should just use the URL that all the pages are linking to, even though the canonical is pointing at something else. So I think for something like the canonical, it's less of an issue, because we should have a backup in all of those other signals anyway; we should have a confirmation that actually this URL is the right one to use. Cruft would make it a lot more challenging overall, wouldn't it? Sure. And things like e-commerce sites that have all of these different parameters and lots of different ways of crawling, they make it really, really interesting. OK.

So, Jeff, you're saying that regardless of how fast that JavaScript renders or is being loaded for users, that doesn't really matter to Google, because Google does its rendering on Google's side. It doesn't really matter if it renders within 500 milliseconds or five seconds, because you do the rendering on your side anyway. With the time that it takes to render, it's something where we try to have a reasonable timeout so that we don't keep our computers spinning all the time processing individual pages. So that's something where I'd still keep a reasonable time in mind, just so that if you have a million pages and each of those pages takes a couple of minutes to actually render, then you're going to keep Google really busy, and we're going to be really slow with your website. You mean render on your side, again? Yeah. OK. OK, got you.

So you mentioned Tag Manager and head tags like canonical and noindex.
What about these JavaScript things that you see, for example, on some sites, where until the page is fully rendered, you just see a loading sign and there's no actual content within the HTML? And then when it's fully loaded, you can see the content, and it may take two or three seconds. So what you're saying is that can have a negative impact, because at first you're not seeing any content, and it takes some time until you get to the rendered version and index the rendered version. It depends on how they set that up. If the content is in the HTML and the spinner is maybe just loading a video in the background, or images or something like that, then in the HTML we already have the content that we can use for indexing. But if all of the content is pulled in with JavaScript, and it really takes a while to pull that in, then the initial indexing will be this empty HTML shell where we don't have any content, or not a lot of content, and it does take us a while to actually render those pages so that we can pick up the full content. So for a lot of sites that have static content, that's less of an issue, because if it takes a couple of days to get that indexed, nobody's really going to care. But if you have a news website, or anything that's really fast-changing where you need to have content indexed as quickly as possible, then obviously that delay makes it a lot harder for you to stay competitive and keep your content fresh in the index.

Is this something similar to how you process CSS files as well? I mean, you are taking into account the position of certain elements on the page, or certain content. Do you do that on the server side as well? Like, you're first just getting the HTML, and then you're rendering it with the CSS to take into account all those different things, design-related things?
I suspect we do it slightly differently there, where we can optimize things a little bit better, because a lot of times, I imagine, CSS is easier to process than all of this arcane JavaScript that goes back and forth between servers and stuff like that. But it's also something where, I mean, if we have the HTML with the textual content already, then that's a really big step. And if it's just a matter of whether something is on top or on the bottom, then that's almost like tweaking things; it's not a matter of completely changing the meaning of a page. Sure, but you can have a certain CSS line that hides a portion of the content under a certain tab, or makes it harder for users to get to that piece of content, so that can have a bigger impact versus other CSS rules. OK.

All right, let's jump to some more questions. Mobile-first indexing: how important is content parity between mobile and desktop home pages for a new site? I've heard conflicting updates on this and would like to get a straight answer. So there are two aspects there. On the one hand, we're trying to switch sites over to mobile-first indexing when we can tell that the mobile and desktop pages are comparable. So in general, if we see that the mobile version has significantly less content, images, structured data, internal linking, or any of that, then we probably wouldn't switch that site over to mobile-first indexing for the moment. On the other hand, if we do switch over to mobile-first indexing, then we will only use the mobile version. So content parity, at that point, once a site is switched to mobile-first indexing, is not a question anymore. We only use the mobile version for indexing. That's the only version that we use to rank those pages at all. So those are the two different aspects to keep in mind there.

John, can I ask you a question again on that? Sorry, it's just a really quick one. What if the content is equivalent across the mobile and the desktop, but on mobile it just looks a mess?
Well, will you consider that to be equivalent? Would you index the mobile site? I mean, obviously, Googlebot's probably not going to be there saying, well, this looks a mess. But the point is, it's a negative experience if somebody gets to pages and it just looks an absolute mess, even though the content is the same. It's not a good user experience, is it? I don't know how we would treat that. It's hard to say what content being a mess would really mean. It sounds like maybe it's just not mobile-friendly. In that case, we would still use the mobile version for indexing, but we wouldn't give it the mobile-friendly ranking boost. But there are a lot of sites out there that pass the mobile-friendly tick box and are still an absolute mess. That's the point, though. That's kind of up to them. I mean, with mobile-first indexing, if we can recognize there's a mobile URL and a desktop URL, we'll still show the appropriate version. So it's not that we won't send people to the mobile version or to the desktop version afterwards; that difference will still be there in the search results. But we'll really just use the mobile version for indexing. And sometimes, if pages are ugly, if the layout is a mess, but the text is all there and the content is essentially the same, we can still index that content. And then it's more a matter of whether your users are converting or not, which is between you and whoever goes to your page. It might be that your site is indexed well and ranking well, but nobody is converting. So what use is that?

What about if, on the right-hand side on desktop, you have a sidebar with a list of other related searches or whatever, things that are useful, and the mobile breakpoint kicks in once it gets to a certain size, and the list goes much further down, outside of the viewport? That's going to have an impact, isn't it? Well, it's still on the page. So it's not as useful, is it? Maybe, I imagine.
I know you're not going to say, but things like reasonable surfer, and that kind of potential for it to be an actually useful link: that changes the whole experience, doesn't it, for a user? I think you'd see bigger effects on the user side than on the SEO side, or something like that. But the user side, I think, is critical in the end, because if users can't deal with your content, they're not going to recommend it; they're not going to convert. Then all of the SEO work that you do from a technical point of view is essentially for nothing. So that's something where you definitely need to take into account both of those sides: making sure that technically we can pick it up properly, but also that users are able to have a good experience there, so that you have all of those secondary effects that come into play.

You know, one final thing, sorry. Say you're using bottom navigation. How would that be counted in mobile-first indexing? From a ranking perspective, it's a mobile experience; that wouldn't be seen as something like a footer, would it? I'm pretty sure it wouldn't, but I'm just thinking of ways in which you can make it easy for the user to navigate quickly and easily around the site with this limited space. So things like sticky footer menus, are they still as useful as the ones at the top? They could be, sure. OK.

And just... Sorry. Yeah, but then on mobile, the pages get so long, and I know you like the quote-unquote above the fold. It's interesting: if the footer is being pushed down so far on mobile, depending on whether you have a lot of good content, is that going to hurt us? No, I don't see that being a problem with regard to SEO. We can still pick that up; we can process those pages. I don't see that being a problem from an SEO and signal-passing point of view. It's really more from a usability point of view that you might want to look at it. Is this something that is really important for my users?
And if so, how can I make it useful for them? But that really depends a lot from site to site. All right, let me jump through some of the questions that were submitted so that we can cover some of those as well. How can we add a Knowledge Panel for a news agency website without getting an address shown to our Google search visitors? This is something that we pick up organically over time. So if we can recognize that this query is essentially leading to your website and we have information about your website, then we'll set that up. If you have your website verified in Search Console, then that's something you can give us feedback on once we've shown a Knowledge Panel for your website. But that's not something that you can force. What would you suggest for a website that's related to lyrics, so it's all copied content? What are some good ideas? So essentially, you're kind of answering that on your own as well, in that if you're providing something that everyone else has, then there's very little reason for us to also show your website there. So you really need to kind of take a step back and think about what you can do differently than everyone else. And what is something that you could provide in a way that makes a lot of sense for us to send people to your website for? So don't just do what everyone else is doing. Really think about what is kind of unique that you could provide, that has value, that people would want to go explicitly to your website for. And some lyrics sites have done some really neat things over the years. So it's not that this area is kind of impossible to create new content for. But it does mean that people have worked on this for quite some time. And you really have to kind of up your game and do something really fantastic here to leave a good impression. My website was hacked about a month ago. I recovered the website within a day after being hacked. But the hacker injected some kind of links and generated thousands of pages with different URLs.
It's all sorted, but I'm getting tons of 404 errors in Search Console. What can I do there? Essentially, if these pages are returning 404, then that's perfect. Over time, we'll crawl them less frequently. So you should see those errors less frequently. But I'd expect that we keep them in mind for a pretty long time. And we'll continue mentioning these URLs as being 404s for probably years to come. And having 404s on your website is perfectly fine. John, somebody muted you. All right, whoops. OK, so I'm not sure how much of that you picked up. But essentially, 404 errors are perfectly fine and not something that you need to change. Mobile-first indexing: how to deal with a site where part of the content is not usable on mobile. So for example, an online game site that has Flash games and JavaScript games. Will they be crawled? Should they be included in the mobile version even if they're not usable on a mobile device? I think that's something you have to think about yourself, in the sense that you can always have some part of your pages on a mobile version and some part not. That's perfectly fine as well. What I suspect will happen with regards to mobile-first indexing here is that our systems will be a bit confused and not completely know if they can switch the site over to mobile-first indexing. If we have a lot of Flash content and a lot of JavaScript game content, can we switch it over without causing problems, or would switching it over to mobile-first indexing essentially cause more issues than it would actually help? And for the moment, we'd probably stay on the safe side.
So if you're curious about mobile-first indexing, if you want to prepare your site for that, I might look into ways of making it easier for us to recognize that your site is ready for mobile-first indexing and that we can switch it over. Which might be, maybe you move your Flash games over to a different site, something like that, to really make it clear for us: this site is ready for mobile-first indexing, and this other site here is not suitable at all for mobile-first indexing. And I guess, regardless of that, it might make sense to rethink the general strategy on Flash games. I don't know how usable they'll be in the future anyway, with browsers not enabling Flash by default. So that might be something to rethink anyway. I have a client whose Search Console data is showing a ton of URLs triggering errors. The URLs are all about shoes and sunglasses and don't exist. We think they're from a past site that was previously hacked. Is there anything I should be doing or looking for in this instance? No, that sounds similar to the previous case. Essentially, if these return 404, then you're all set. They'll drop out of the index if they're still indexed. We'll still retry them from time to time. Probably we'll still retry them for a couple of years at least. But if they return 404, that's perfectly fine. I am getting a ton of soft 404s in Search Console. The vast majority are not in the index, so I don't have a lot of info on them. I do see some examples, like this one where the original URL, I think, moved to HTTPS. Yes. So I'd probably need to take a look into the details of what exactly is happening there. But that's something where, in general, if a URL is redirecting somewhere else, then we shouldn't be picking that up as a soft 404, unless you're doing something like removing a lot of URLs and redirecting them to one common page. Then that would be something that we might recognize as a soft 404 signal.
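The distinction described here between a normal URL move and a mass redirect that looks like a soft 404 can be sketched as a small helper. This is purely a hypothetical illustration of the reasoning, not anything Google publishes; the function name, status codes, and redirect targets are made-up inputs you would normally obtain by fetching the URLs yourself (for example with curl):

```python
def classify_response(status, location=None, mass_redirect_target=None):
    """Rough, hypothetical interpretation of an HTTP response for indexing.

    status: the HTTP status code returned for the URL.
    location: the redirect target, if the URL redirects.
    mass_redirect_target: a common page that many removed URLs all
        redirect to (if you know of one) -- that pattern is what can be
        treated as a soft 404 signal rather than a normal move.
    """
    if status == 404:
        # Fine: the URL just drops out of the index over time.
        return "gone"
    if status in (301, 302, 307, 308):
        if location and location == mass_redirect_target:
            # Many removed URLs funneled into one page.
            return "possible soft 404"
        # One URL moved to another place: processed as a normal move.
        return "moved"
    return "indexable" if status == 200 else "other"

print(classify_response(404))                                  # gone
print(classify_response(301, "/new-page"))                     # moved
print(classify_response(302, "/", mass_redirect_target="/"))   # possible soft 404
```

The point of the sketch is just the branching: a single 301/302 to a new location is unremarkable, while a 404, or a pile of redirects all landing on one common page, is handled differently.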
But if it's just one URL redirecting to another URL, like you moved the URL from one place to another place, we generally wouldn't be picking that up as a soft 404. If you're seeing that as a soft 404, it might make sense to just submit that again through Search Console so that it can get reprocessed. But in general, we reprocess these from time to time anyway. So we'd probably pick that up as a URL move anyway and be able to process that properly. How to protect from competitors creating bad or toxic backlinks and ruining our rankings? It seems the disavow tool is too late; anyone can create bad backlinks. So in general, we're pretty good at recognizing a lot of these bad backlinks and just ignoring them. That's something where our algorithms have a lot of practice with that. We have a lot of feedback from the web spam team going into our algorithms there. So we're generally pretty good at picking that up. The disavow tool is a perfect way to take care of that if you're worried about it. That said, a lot of the cases that I've looked into where people are complaining about negative SEO seem to be either normal ranking changes that happen in search all the time. So that's something that can always happen. It's not the case that because a site is ranking high now, it will always rank high in the future. That can change at any time. What we've also seen every now and then is that a site owner will be doing lots of weird link things in the background for years and years. And then at one point, someone else does weird link things with that site. And then they think, oh, this is negative SEO. And actually, it's just our systems picking up the old weird link things that have been going on for a really long time, that probably you should have stepped back from earlier.
But regardless of that, that's something you can also kind of help clean up with the disavow backlinks tool, but also where you can kind of take a step back and say, OK, I'll clean up all of these old issues as much as possible. I'll make sure that we don't do more of this in the future. And use the disavow tool to kind of take care of all of the issues that we've seen over time. In many cases, using the domain directive in the disavow tool makes it pretty easy to stay on top of lots of links. So you don't have to list them individually. You don't even have to kind of look at them individually. If you disavow the whole domain, then you have all of those covered automatically. And it's done fairly quickly. Would they just be ignored, though, John? So somebody's got some legacy old rubbish from the past. A lot of times we could just ignore that. Yeah, so say the site owner has actually, you know, gone and bought a load of articles and nonsense and stuff like that. They wouldn't necessarily need to have a disavow file for that, would they? Or would they? For most sites, no. That's something where we have a lot of practice recognizing these things. But the thing I would keep in mind there is if you're aware of large-scale backlink activity that you've been doing, or that your SEO company has been doing, then just for peace of mind, it makes sense to clean that up. So even if it was legacy from years back and you come across a project and it's like directories and all that carry-on, would you still disavow those? Or would you just say, well, that's just the way it was back then, and Google's just ignoring that stuff because, of course, everyone was doing it at the time. Probably we'd be ignoring that already, yeah.
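For reference, the domain directive mentioned here is one line per host in the disavow file, a plain-text file uploaded through the disavow links tool. The domains below are placeholders for illustration:

```text
# Lines starting with # are comments.
# Disavow every link from this host at once:
domain:spammy-directory.example
# Individual URLs can also be listed, one per line:
http://link-farm.example/some-page.html
```

Disavowing at the domain level, as discussed above, covers all current and future links from that host without listing each URL.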
But I mean, it's something where if you really want to be sure and you're like, oh, I'm really scared that Google will stumble upon this and that someone from the web spam team will say, well, this site has been doing sneaky stuff, then you can use the disavow tool. And then you kind of have it cleaned up. OK, good to know. Thank you. All right, I think that's pretty much it. There's one more question about a specific site, but I don't see the URL mentioned, so it's kind of hard to figure out what exactly we need to look at there. But if you have a link to a forum thread and you can pop that into the Google Plus post, that would be really useful. Let's see, in the chat, what has been happening here? Lots of stuff. JavaScript canonical, we've talked about that. I've heard webmasters complain to me. They said e-newspapers and websites buy links. They needed a few suggestions from you. Yeah, you shouldn't buy links. I don't know what else we can suggest there. We've talked about links a lot of times in the past. Can I ask a question on links? Sure. So this is probably like a really easy answer, and I probably should know the answer. But you know, not everybody knows everything. So I know that people talk about the whole subdomains and subdirectories thing, blah, blah, et cetera. So there's a project that I look after that I've actually decided, well, you know, there are like three or four different target audiences here. And I want to sort of split it out into different topics so that actually it becomes separate sites as such, because each of them really helps a different audience completely, serves a different purpose, helps with different tasks, blah, blah, yeah. But I want to have it so that all of these are still connected and they're known to be part of the same organization. At the same time, I'm obviously concerned that it might start to look like a private blog network when the objective is not to do that.
It's just to say, well, look, you know, this section is all the recipes. Again, it's not recipes; I'm just using that as an example. And these are all the restaurants, here, over here in this site, yeah. And these are all the something else, that actually serve another purpose, yeah. So it's kind of three separate sites, yeah, for focus and topical strength, if you like. And actually, I think it's more useful for the visitors to have it as very self-contained pieces. But at the same time, I just don't want it to look like a blog network if I link them all together. So should I disavow links in that respect? Because I remember Matt saying years back, well, actually, no, there's not really a need to do that, because they all help different audiences and they're logical and they fit together, yeah. Yeah, I think that's fine. I wouldn't see a need to disavow any of those links. If these are essentially three different business groups within the same company, that's perfectly fine. And that's something where you could use different domains, subdomains, or subdirectories; it's essentially up to you. I think the value of subdirectories is really that it makes it a lot clearer that these belong together and that they're all one big website. But sometimes you don't want to have that connection be that clear. Sometimes you really want to target completely distinct audiences, and maybe you don't even want them to kind of be directly guided towards the other content. So if you have something that you're providing for dealers and for consumers, then maybe you have something completely different, different experiences there. At that point, it's like you have a dual model and actually one audience has no interest in the other half of the model, yeah. So it kind of makes sense to split them but have that connection, yeah. That's perfectly fine, yeah. Okay, yeah, thank you.
But John, those connections you mentioned with the subdirectory, where it's kind of very clear that everything is part of the same thing: is that taken into account from a Google signals point of view, or did you just mention that for users generally? I mean, does Google take that into account in any way, or is it just like with 301 and 302 redirects, where if you see 302s for a longer period of time, you'll just treat them as 301s, definitely. I think it's tricky, because we can recognize that things are on the same website, even if they're on subdomains, and we can recognize that they're separate websites, even if they're in subdirectories. So there is no kind of one-size-fits-all answer there. In general, if it is in a subdirectory, then that's a lot easier for us to say, well, this is all one part of the same thing, and that helps us to kind of grow that bigger thing. But that doesn't necessarily mean that subdomains won't work there. What about separate domains? I mean, this is specifically going to be separate domains, yeah. Yeah, I think like subdomains or subdirectories don't play a role there anyway, yeah. So yeah, completely separate domains. That's something where we see them as separate websites. We see kind of the links between the versions, which is fantastic, and that's really all we need. And they'll be site-wide, to be honest. That's my point, that's my concern, because I don't want it to look like this is a big PBN that's being built. It literally is, well, this is an old site anyway, that actually needs to be on a newer, more usable CMS platform. So it's been redesigned, half of it is split into a different market, and then there's a separate thing, which is to serve a different task of a user. So it will be like three separate, sort of three separate sites. That's perfect. That's perfectly fine. That's something where, like, I couldn't see anyone recognizing that as a PBN. Those tend to be quite, quite different, yeah.
Not gonna be very private anyway, so I mean, there is that, so yeah, okay. Anyway, thanks, that's put my mind at rest, yeah, thank you. Cool. All right, let's take a break here. It looks like we have pretty much everything covered and time-wise we're running out. Thank you all for coming. Thanks for all the questions that were submitted. If there's anything else on your mind, feel free to drop that into the next one, which I think is on Friday in English. You're also welcome to pop by the webmaster help forum where lots of awesome webmasters like Mihai, for example, pop in and help out as well. So that's also a great resource to get more advice and more tips. All right, thank you all for coming and see you all next time, maybe. Thank you, John. Thank you, John. Bye, John, bye. Bye-bye.