All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these office hour hangouts, together with webmasters, SEOs, folks who make great websites. And part of what we try to do is answer any kind of questions that come up with regards to websites and web search in particular. So a bunch of things were already submitted. But as always, if any of you have a question to start off with, feel free to jump on in now.

Hi, John. Hi. Actually, I was searching something, and I came across that, for New Zealand and the UK, I'm getting results from those businesses in the mobile version. OK. And you're not from the UK or? I'm from India. OK. So why am I getting those businesses? That can be normal, too. It kind of depends on your search settings and on what you're searching for. Sometimes local search results are valid in other countries as well. So it's not necessarily the case that I'd say that's wrong or something is broken. That can definitely happen, that you search for something and the best result is something that's from a company in a different country. But I'm searching for the location for the DL, the driving license. And it was from the New Zealand and the UK sites. That's why I'm asking. That sounds kind of suboptimal, yeah. So what I would do there is submit feedback in the search results. On the bottom of the search results page, you can always submit feedback. And that goes to our team that reviews this feedback and tries to figure out what went wrong with these individual searches. So that's usually the best approach there. I mean, it can also be other things, such as us perhaps recognizing your IP address wrongly, or things like that. But if this is a one-off search that you do and you see results that don't work at all for you, then I would definitely use the feedback form. OK, fine. I will do that. Sure.

All right. What else is on your minds? I could ask a question if nobody else has any questions to ask. Uh-oh. OK. I don't have to. I could just not ask questions. What's up, Ariel?

All right, so I wrote about Google buying links. And obviously, Google has no intent of manipulating their own search results. So obviously, the whole concept of Google buying links to manipulate their own search results to rank for their own name is kind of ridiculous. But in the past, Google has penalized themselves for buying links, unintentionally or intentionally or whatever the purpose might have been, just to say, hey, we also can make mistakes, we also have to abide by our own guidelines, and we're going to go ahead and prove a point here. The past two times Google has done it, they haven't really commented on it. And I'm just curious. I'm sure you guys have talked about it behind the scenes, while not accidentally copying me on the emails about what's going on. So I'm just curious, is there a reason why you're not commenting about this case? I mean, obviously, you might want to say, well, in this case, there's obviously no intent behind it; previously, you wanted to make an example. I mean, what's the feeling about this, outside of why is Barry covering this and why is Barry giving me a hard time?

I don't know what the web spam team in particular is looking at there. But one thing that is a bit different than a couple of years ago is that there are things like the disavow file, where you can explicitly say I don't want these links to be taken into account.
So those are things that might also be playing in, where essentially the link is there, but there are no signals being passed, and that's kind of taken care of that way. So those are things that, at least from the initial cases I remember, weren't available back then.

OK, so it's more about Google better understanding those links and not even passing signals. The whole concept of disavowing, or devaluing versus demoting with the new Penguin algorithm, kind of plays into effect here, where Google's not even looking at these links and they're not benefiting Google, so you might not have to go ahead and penalize them anyway. So that's maybe the only difference between now and then. I don't know the specifics of what the web spam team has done there, or what the people that bought this link, or that did this, I think, as a sponsorship or a donation or something like that, what was happening there. But that's definitely a difference there. And a lot of these things are things that sites essentially can clean up on their own without doing a press release about it. So that's kind of my take there. Thank you. All right.

So let's see. Hello. Hi. I have a question. I already posted it on Google+, but I'll ask it here as well. If a website has had a declining trend in traffic and a progressive de-indexation since the first quarter of 2017 because of Google updates, how much time will it take until we can see improvements once the problems are resolved? Because we're waiting. Was it an indexing problem or a ranking problem? Ranking problem. Ranking problem.

So generally speaking, we reprocess sites regularly. So that's something that happens on a per-URL basis, meaning some URLs get reprocessed a lot faster, maybe several times a day. Some URLs take a little bit longer, a couple of days. Some of them take quite a long time, like a couple of months. And depending on the website and what kind of URLs are there, if you make bigger changes across the website, then that can take a bit of time to be reprocessed completely. So it's not the case that we have to understand the whole website again all at once, but rather, step by step, as we understand the changes on your website, we can take that into account when it comes to indexing.

Yes, there are tens of millions of pages. We need to wait.

So that's not something where we have a fixed time, where we say after one month you will see this change, but it really can take quite a bit of time. One of the things you can look into doing, especially with a really large website, is to think about what you can do to simplify crawling. A lot of times with large websites in particular, we run into so many different URL variations that to crawl 1 million unique URLs with content, we have to crawl maybe 10 million, or 20 or 50 million URLs, to kind of go through all of these duplications. So the easier you can make it for us to crawl only the URLs that you actually care about, the ones that are important for your website, the faster we can get through a lot of that. So things like using clean URLs in the internal links on the website, using canonicalization methods, rel=canonical, redirects, sitemap files, using the same URLs everywhere, all of those things make it a little bit easier for us to crawl them. Thank you. Sure.
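To make that crawl-simplification point a bit more concrete, here is a minimal sketch in Python of what "one clean URL per page" can look like in practice. The hostname, the tracking parameters, and the normalization rules are made-up examples for illustration, not anything Google prescribes; the idea is simply that internal links, redirects, rel=canonical, and sitemap entries should all agree on the same URL.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical parameters that only track sessions or campaigns and
# don't change the content of the page.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid", "ref"}

def normalize_url(url: str, canonical_host: str = "www.example.com") -> str:
    """Map the many URL variations of one page onto a single clean URL.

    Internal links, rel=canonical, redirects, and sitemap entries would all
    use the value this function returns, so a crawler only has to fetch one
    URL per piece of content.
    """
    scheme, netloc, path, query, _fragment = urlsplit(url)

    # Always use one scheme and one hostname (here: https + www).
    scheme = "https"
    netloc = canonical_host

    # Drop pure tracking parameters and sort the rest, so the same filters
    # always produce the same URL string.
    params = [(k, v) for k, v in parse_qsl(query) if k not in TRACKING_PARAMS]
    query = urlencode(sorted(params))

    # Handle trailing slashes consistently (here: strip them, keep root "/").
    if path != "/" and path.endswith("/"):
        path = path.rstrip("/")

    return urlunsplit((scheme, netloc, path or "/", query, ""))


print(normalize_url("http://example.com/shoes/?utm_source=mail&color=red"))
# -> https://www.example.com/shoes?color=red
```

In practice you would apply the same normalization wherever URLs are generated, so the crawler never discovers the parameter-laden variants in the first place.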
All right, let me grab some that were submitted. The first one I don't have a good answer for: Glenn is asking about PDFs, the links in PDFs. I need to double check some things there, because it seems that there's some kind of conflicting information out there. And it would be good to double check the current status rather than just focus on things as they were in 2011, or on the way that I think they are now. So I'll double check on that and see if I can have something for you for the next Hangout. What is it, I think on Friday? So if you want to submit that again on Friday, I'll probably have a better answer.

Another question from Glenn here is, now that Chrome is taking action against sites violating the Better Ads Standard, will that become a search ranking factor as well? So as far as I know, at the moment this is really something only with regards to Chrome, from a usability point of view, where if we recognize that sites violate the standard the way that it's defined, then that's something where Chrome might choose to not show any ads for them. So that's specifically with regards to Chrome. I'm not aware of anything that's essentially taking that same Better Ads Standard and applying it to web search, though obviously there are some of our algorithms that look at similar things. So if above the fold there's actually no useful content, it's just all one big ad, then that might be something where we take action. Or if on mobile an interstitial is shown rather than the actual content, that might be something where another algorithm takes action. So it's not the case that these kinds of issues on a website are completely isolated from search, but rather that we just don't take these Better Ads Standards into account one to one with regards to search, at least at the moment. And I don't know if that will change over time. I don't know how these standards will evolve over time. So that's something where we can't look into the future for you. But at least at the moment, that's how it is.

Can a service review only ever be used on one page, or can it be shared on multiple pages relating to the same branch? I ask this as we have reviews for particular branches, and these are marked up. However, we also have some pages that are kind of general. We mark those up too. So in general, the structured data on a website, or on a page, should be specific to that particular page. It shouldn't be something where you say, well, this is a review for the business in general, and therefore we'll put it on all of the pages on a website, but rather it should be really specific to the primary topic of that page. So from that point of view, I'd generally shy away from mixing and matching reviews and trying to put together a set of aggregated reviews that are based on what you'd like to have shown on a website. So it shouldn't be like testimonials; they should actually be reviews. So from that point of view, I'd try to shy away from mixing and matching and picking out individual things, which might be seen as something problematic on the website. But what you could do is also post in the Webmaster Help Forum and ask other people who are in similar situations how they implemented this kind of setup. And maybe there are some ideas there that let you handle this in a more elegant way.
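As an illustration of keeping review markup specific to one page, here is a small sketch that renders JSON-LD for a single branch page only, using the schema.org LocalBusiness and AggregateRating types. The branch name and numbers are invented; the point is that each branch page only carries the ratings actually collected for that branch, rather than one site-wide block copied onto every page.

```python
import json

def branch_review_jsonld(branch_name: str, rating_value: float, review_count: int) -> str:
    """Build JSON-LD review markup for one specific branch page.

    Each branch page gets only its own aggregate rating, so the markup
    matches the primary topic of that page.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": branch_name,
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating_value,
            "reviewCount": review_count,
        },
    }
    return '<script type="application/ld+json">\n%s\n</script>' % json.dumps(data, indent=2)

# Hypothetical branch page: only the reviews for the London branch go here.
print(branch_review_jsonld("Example Plumbing, London branch", 4.4, 27))
```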
On our e-commerce site, we have a few categories, and within these we have subcategories, including subcategories that we've never sold any products from. Is there any harm if we change these category pages to noindex, nofollow, so that we can concentrate the link flow going to the main categories that actually do the work for us?

From my point of view, you can do that. I suspect you won't see any visible change within a reasonably sized website. If you just change individual pages to noindex, nofollow, I suspect that's not something where you'd see any measurable effect on the website. It's probably a little bit different if you have a really, really large website with millions and millions of pages, and you recognize that some parts of your website are essentially crawler traps, where the crawler kind of digs in and tries to get millions of other URLs, and actually none of these URLs are relevant at all. Then that's something where you might say, OK, nofollow from here on down, just so that Googlebot doesn't get stuck in these kind of endless corridors within your website. But if these are just individual categories that you're trying to optimize a little bit, then I suspect your time is better spent on other things rather than tweaking the nofollow on some of these links.

Does link equity only flow downwards on a website, or can it also flow upwards? That's an interesting question. In general, the signals that we get and that we can pass on through links, they go through the links on a page. And a lot of pages have links to higher-level pages. A lot of pages have links to lower-level pages. And the signals that we get, the signals that we forward, they go through all of them. So essentially, the signals that we have for links go in any direction in which you have links. And from that point of view, of course, they can flow upwards as well.

If I have two internal links on the same page and they're going to the same destination page, but with different anchor text, how does Google treat that? So from our side, this isn't something that we have defined, where we say it's always like this, always like the first link, always the last link, always an average of the links, or something like that. But rather, that's something that our algorithms might choose to do one way or the other. So my recommendation there would be not to worry too much about this. If you have different links going to the same page, that's completely normal. That's something that we have to deal with. We have to understand the anchor text to better understand the context of that link. And that's completely normal. So that's not something I'd worry about there. I know some people do SEO experiments and try to figure this out and work out, oh, Google currently does it like this. But from our point of view, that can change, and it's not something that we have defined. So even if you manage to figure out how we currently do it today, that's not necessarily how we'll do it tomorrow, or how it always is across all websites.

We changed our website, say, three years ago. We can still see in Search Console that Google has crawled the old, redundant pages, even though they're 410 pages. Why do they still appear? Should we put them in the robots.txt file? Is there crawler time wasted on this? So from our point of view, if we know that pages used to exist, we will occasionally check them again. So that's probably what you're seeing here. We understand that these are 404 or 410 pages, or at least shouldn't be indexed, but we know about those URLs. And every now and then, when we think, oh, we have nothing better to do on this website, we'll go off and double check those URLs.
And if we check those URLs and see a server error or a page not found error there, then that's something we'll tell you about in Search Console. And you can look at that and say, oh, yeah, I know about this, I removed those on purpose, and that's fine. So it's not something you need to block from crawling. It's not something you need to worry about. It's not that we're losing crawl capacity by looking at those URLs. It's essentially a sign from us that we have enough capacity to crawl more URLs on your website; we're just double checking some of the old ones, just in case you managed to put something back out there.

Can it be more beneficial from an SEO point of view if you have some links to your category pages at the top of the page, as opposed to being in the navigation? Is there any benefit to either way? Purely from an SEO point of view, there is no difference for us. So that's something where you can really focus on usability and think about what works best for your users. If you're not sure what works best for users, you can also test that. From our point of view, we use these links to discover the different pages on your website and to better understand the context of those pages within your website. So whether we find them at the top or at the bottom or in the middle somewhere, that's all perfectly fine for us.

Question about Google for Jobs. We're running a job site, and we included the proper markup. Now we're seeing the number of job postings in Search Console most of the time being significantly lower than the number of live jobs. At the same time, we're seeing a larger number of errors reported in the Index Coverage report labeled "submitted but marked noindex". We believe the issue here is crawl speed. Essentially, Googlebot often seems to only fetch new detail pages once they've gone offline again. Is there anything we can do to fix that?

So that's essentially the same thing that you can do across any site with regards to getting new content indexed as quickly as possible. There are two main aspects that play into that. Actually, three. On the one hand, the server capacity, as we can perceive it. So how fast is your server? How fast does it respond to requests? Does it slow down when we send a lot of requests? Does it serve server errors if we send a lot of requests? And as you can imagine, we don't want to cause any problems on your server. So if we see that our crawling has an effect on your web server, then we'll back off and try to crawl a little bit less. So that can result in us crawling less on a website overall, or crawling a bit slower, just to stay on the safe side. If you know that your server can handle a lot more, then that's something where you can go into Search Console, and I believe in the Help Center there's a way of submitting feedback about Googlebot, where you can say, well, Googlebot is crawling me this many times a day, and you can crawl me, I don't know, five times a second if you want, my server can take it. And the Googlebot engineers will take that, double check that it's reasonable, and adjust that for Googlebot crawling. So that doesn't mean that we'll necessarily crawl that much, but it might change how much we would crawl at maximum.

And how much we would actually crawl on a website is also dependent on how important we think that website is, or how important these pages are within the website. So the more we can recognize that this is actually really good content and we should really get to it a lot faster, the more we'll try to crawl it faster. So we'll still be limited by what we think your server's capacity limit is, but at least we'll be able to get up to that limit as much as possible if we think that this is actually a good site. And the last one, which is probably the easiest one to take into account, is that we need to know about these new URLs as quickly as possible. So one way to do that is to ping the sitemap file to us once you've made any change there. And within the sitemap file, specify the URL and the last modification date of the pages. And if we recognize that a URL is new, or has a new last modification date, we'll try to crawl that as quickly as we can. And that's kind of the best way to get this content to us as quickly as possible. So I hope there was something in there that was new for you or useful in that regard. If not, I'd also recommend maybe starting a thread on the help forum with the URL of your site, so that others can take a quick look to see if there's nothing really blocking the site from being crawled as quickly as we possibly can.
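For that last point, telling Google about new or changed URLs as quickly as possible, a sitemap with lastmod dates plus a ping is the usual mechanism. Here is a rough sketch; the URLs are placeholders, and the ping endpoint shown is the one Google documented for sitemap pings at the time, so treat it as an assumption rather than a guarantee.

```python
from datetime import date
from urllib.parse import quote
from urllib.request import urlopen
from xml.sax.saxutils import escape

def build_sitemap(entries):
    """Render a tiny sitemap with a <lastmod> element for each URL.

    `entries` is a list of (url, last_modified_date) tuples.
    """
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for url, lastmod in entries:
        lines.append("  <url>")
        lines.append("    <loc>%s</loc>" % escape(url))
        lines.append("    <lastmod>%s</lastmod>" % lastmod.isoformat())
        lines.append("  </url>")
    lines.append("</urlset>")
    return "\n".join(lines)

def ping_sitemap(sitemap_url):
    """Tell Google the sitemap changed, so new URLs get picked up sooner."""
    ping = "https://www.google.com/ping?sitemap=" + quote(sitemap_url, safe="")
    with urlopen(ping) as response:
        return response.status

sitemap_xml = build_sitemap([
    ("https://www.example.com/jobs/12345", date(2018, 11, 27)),
])
print(sitemap_xml)
# After uploading the generated file to the site:
# ping_sitemap("https://www.example.com/sitemap-jobs.xml")
```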
Let's see, a couple of questions combined. Unfortunately, celebrities' personal photos get hacked and leaked, and when my site reports on it, we have to use words like "naked photo". Or there was an actor arrested for child porn. I appreciate that some of these are banned words in gray areas, but how can reporters work within these parameters?

So I suspect this is mostly around SafeSearch, where we try to recognize which URLs are relevant to show to a general audience, and which ones we only show when SafeSearch is turned off. From my point of view, I don't know if there's an easy way to get around that, to say, well, I'm using all of these adult content type words, but actually my site isn't about adult content. I suspect that's something that's really hard for algorithms to do. So that's probably something where I would expect at least our algorithms, or maybe search algorithms in general, to have a hard time there. What might help is the feedback link on individual queries, if you see things going really wrong, or you see results that are really irrelevant for some of these queries. And what might also help is to reach out with the details of what you're seeing there, which pages on your site would be relevant for some of these queries and maybe why, and maybe bring some suggestions on what we should be showing. So these are things that are definitely still evolving. It's not that we say we've finished everything around SafeSearch and nothing is changing anymore, but these are, I'd say, really hard problems. And that's something where I can imagine there's no absolute answer either way.

One of the prominent themes from the last AMP Conf was to stop thinking about AMP as mobile only. Will Google be looking to feature or test AMP cards on desktop at some point in the future? I don't know about the future. I don't know if we'll show cards for AMP in the future, but you can definitely make desktop-friendly websites with AMP as well. AMP is essentially a responsive design framework that you can use for all kinds of content. For example, the AMP documentation is all on AMP pages, and you can look at it on desktop. It shows up in the normal desktop search results as well.
It's essentially a website design framework that you can use for all kinds of different aspects. I don't know which of these will result in features being shown in the search results, but I primarily look at it as a framework and think about, does it make sense for your site or does it not make sense for your site? And if it makes sense, then maybe try things out. AMP is something that you can use on a per-URL basis, so you can easily just try out individual things and see how that goes. And if that works for you, expand that. If you run into problems, you can scale that back, or you can go to GitHub and post about some of these issues that you run into. All of that is possible.

Let's see, a question in the chat as well. Are AMP Stories with no canonical tag technically considered duplicate content? I don't know. I haven't actually looked into how that's set up in particular, so it's hard to say. If you're providing the same content in different formats, then by using the rel=alternate setup, the rel=amphtml and rel=canonical combination, we understand that these two pages are related and that we can pick one of them to show in the search results. But if you're specifically asking about it without the canonical tag, I suspect you're either running into a situation where those aren't technically valid AMP pages anymore, because it's not a self-canonical AMP page, or you run into a situation where we see these as essentially two separate URLs with the same content on them. But I don't know the specifics of AMP Stories, and it's hard for me to make a judgment call about that in general.

Let's see, another question. If in Search Console the number of pages showing structured data doesn't match the number indexed, is that an issue? I think that's a bit confusing in Search Console, in the sense that we try to show a sample, or a relevant sample, of the pages that we find there. It's not meant to be a comprehensive absolute number, and that does make it a bit tricky when you're comparing the absolute numbers and you see, well, index status says there are 5 million pages indexed and the structured data report says there are 100,000 pages that have structured data, but all of my pages have structured data, so what is happening here? I think that's something that the team is definitely looking into, to see what we can do to make that a little bit more consistent, or a little bit easier to digest with regards to what's actually happening there. What I would do in the meantime, though, is use those reports to recognize errors. So if you see issues in one of those reports, those are real issues that we ran across. If everything is just OK, then probably things are just working fine.

Can I ask a question about the Google Search Console? All right. There's a lot of confusion about, let me just see it, it probably drives you crazy, about the beta, or I don't even know what it's called anymore, that it's basically not including all the features that people are using in the old version. Obviously you're not going to produce a roadmap, but I assume you know what you're going to be porting, like the Fetch as Google, and I don't know, certain types of link reports were ported and stuff like that. Is there any way that you guys could be a little more transparent about what is going to be moved, and maybe what you for sure know is not going to be moved over, so that people could prepare, or maybe other tool makers could make their own tools around that?
I guess it's specifically about what won't be moved, in that case, to make it easier for other people to create something that replaces it. I don't know. That's an interesting question. That's something that we can definitely look into. I think they added a link to a forum recently where they're collecting feedback on what people actually want within the new Search Console. So that's probably a useful way to figure out what people don't care about at all. But it's also tricky, in the sense that sometimes people want something that, from our point of view, doesn't make so much sense, like the old Keywords report, where, from our point of view, that's something that was more misleading than actually useful. But still, a lot of people thought that was really interesting to look at. So that's tricky. Offhand, I don't know of anything that would be removed, because we try to clean things up as we move along. What would probably happen is that some things get combined, like the Structured Data report, the Rich Cards report, and now the Rich Results report. They're all focusing on the same things, and some of them even use old names. So I could imagine at some point it makes sense to take all of that and just make one clean report out of it, rather than three or four individual ones. But otherwise, I'm not aware of anything that we would drop completely. I don't know, what are your thoughts? What should we remove completely and hand off to third-party tool developers?

I'm just curious if there are any features that you are aware of that you guys are removing completely, and it sounds like you're not. I know people do want the toolbar PageRank back, just for show. Outside of that, I don't think there's anything specifically that you should remove. Honestly, the Keywords report I thought was useful in terms of detecting spam with hacked content on your site pretty quickly.

Property sets. Property sets. So, remove, or? They're good. I like them, so I can see across all variants of a site in one go. I think that's something that the team wants to expand on with the new Search Console, to make it easier. Because the feedback of, I have these four variations of every website that I have and it's such a hassle to figure out which one I have to look at, that's something that we hear a lot. So trying to find a way to make it a little bit easier to get the bigger picture of your website, without having to figure out whether it's the HTTP, HTTPS, or non-www version that you have to look at, that's one of the things that they want to do.

But also, John, when you're seeing search traffic changes, sometimes when you look at the whole set you see it's actually just shifting around between them, it's no big deal. Whereas if you're having to, in your head, sum up four different values and go, oh yeah, it was 300, 400, blah, blah, blah. That's why they're handy for me, anyhow. Yeah. That makes sense, too. Yeah.

And the change with the, I guess, request indexing, I find it very interesting. Can you tell us the reason for that quota change? So now it's like a daily quota for the most part, versus a monthly quota. Is there any, like, was it being spammed, or is it something you don't want to talk about, or that you'll share now? It sounds like it was too useful in some sense.
So it's something where sometimes we get a lot of abuse of some of these features, and the team has to figure out ways to deal with that abuse in an elegant way, so that it doesn't affect most normal users, but is still able to catch these issues that we run into. So I believe especially the submit URL feature is something that you can obviously use to submit spam as well. And some people have used that, for example, to submit hacked content on a website, which is all not really what the tool is meant to be used for. So finding a way to deal with that elegantly sometimes takes a bit of experimentation and trying things out.

Got it. OK, so you're basically saying it was changed because of abuse. Well, I don't know why else we would change it like that. I mean, it's like these subtle changes. I know, but you guys wouldn't say anything; you're kind of quiet about why. I'm like, obviously it's about abuse, but you could just say that. OK, then you figured it out. Yeah, sometimes we run across weird things where we think, well, this makes a lot of sense for normal webmasters, and then suddenly it gets used in ways that, from our point of view, are not the way it was meant to be used. Thank you.

John? Yes. How long can we use the old version? How long can you use the old version? We currently don't have any timeline for turning that off. I think there are only three or four reports that are in the new version, so you almost always have to use a combination of the old and the new version. And it'll probably be a while before we have everything migrated over. So don't worry about that disappearing next week or the week after that.

OK, can I ask something about structured data? Sure. OK, what will happen if we have several sites that all belong to one big company, and they use schema markup for local business, and the local business name is not the legal name? All of these websites have one legal name, because they're owned by one company. Is this a problem or not? I think it should be fine.

OK, and another question about structured data. What happens if we have a multi-language website, for example with five languages? How can we handle things like the description of the business? Can we translate it into each language for every local business, or not? I think at the moment most structured data types don't differentiate by language. So if you have a company name translated into different versions, I don't think that's something that we actually deal with elegantly at the moment. I probably wouldn't do that, because a name isn't really something that's relevant to translate. But it's not the case that we have some kind of complicated setup where you can say, well, this is the same company in a different language; that's not available. OK, thank you. Sure.
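To illustrate what that kind of markup often looks like in practice, here is a hypothetical sketch: each country site marks up its own LocalBusiness trading name, and the shared legal entity is expressed once via parentOrganization and legalName (both existing schema.org properties). The company names are invented, and this is one possible way of modelling it, not something the answer above specifically prescribes.

```python
import json

# Hypothetical setup: several country sites, all owned by one company.
# Each site describes itself as a LocalBusiness with its own trading name,
# while the shared legal entity goes into parentOrganization / legalName.
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Bikes Zurich",          # trading name shown on this site
    "url": "https://www.example.ch/",
    "parentOrganization": {
        "@type": "Organization",
        "legalName": "Example Holdings AG",  # the one legal name shared by all sites
    },
}

print('<script type="application/ld+json">')
print(json.dumps(local_business, indent=2, ensure_ascii=False))
print("</script>")
```

The other country sites would carry the same parentOrganization block with their own local name and URL, and the name itself is typically left untranslated.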
May I ask you something? Sure, go for it. Thank you. So if we have a page taken down by DMCA, but we completely change the content on the page, and we want to get it reindexed in Google, is this actually possible? Because from what I read in the documentation, it is not. But what is Google's opinion on this situation? I think the problem there is that we don't want to be in a situation where we have to evaluate your feedback. So that's something where we essentially rely on the initial complaint, where we say, well, someone has a complaint here; you need to figure it out and find a legal solution to that. That's not something that we want to be involved with on a per-URL, per-site basis.

So my recommendation there would generally be, OK, I got caught doing this stupid thing, or some user posted something stupid on my website, and I have to accept that. And I'd move on with my content and maybe use a different URL if you have something else that you want to have shown there. But kind of say, well, it is how it is. And there might be, I don't know, legal things that you have to do depending on what the local laws are. So I can't give you any legal advice there, but at least from a search point of view, it is how it's documented.

Second question: if we have a hacked website, for example a WordPress website, and there was injected code, but we have removed it, how much time is needed for Google to recognize that, so that everything is back to normal? OK, so if it got hacked and you fixed the hack, I guess it kind of depends on what the problem with the hack was. Because there are some things that we flag in Search Console when it comes to hacked content, either in the manual actions section, depending on how we pick that up, or in the security issues section in Search Console. And for both of those, you can request a reconsideration, which means someone will manually take a look at your website and double check that things are OK, and then they'll remove this manual action, or this flag that the website is hacked. And once that's removed, it's essentially a matter of crawling and indexing as it normally happens, where we have to re-understand your content and figure out how we should be showing this website. For the most part, if you've removed any of the hacked, injected content, then that's something that will settle down fairly quickly. But if the hacked content was available for a very long time, then obviously our view of your website is mostly, well, they're selling, I don't know, Nike shoes. Sometimes it could be completely different from the original content. Yeah. I mean, if the hacked content is there for a really long time, then we think, well, this is the business, this is the website that they're trying to run. And then it's hard for us to say, well, they stopped selling these shoes and now they're selling computer software; it's like, maybe they're hacked now, I don't know. So the faster you fix a hack, the easier it is for us to actually reprocess and deal with that. Or in other words, it depends on how quickly you fix these problems. Yeah. Yeah, thank you, thank you. Sure.

Let me see, there are some questions in the chat as well. Since PSI and Lighthouse show new measurement parameters, do we need to use new KPIs to figure out where we should be? Especially with speed, it's something where it's hard to have an absolute measure and say, well, this is the right number, and this is the number I need to focus on. There are lots of different ways to measure speed. And I think that's something that changed with the new PageSpeed Insights score, also with the Chrome User Experience Report data that I think we use in the PageSpeed Insights report. And those are all essentially proxies for what users might be seeing when they go to your web page. And obviously, every user is a little bit different, so there's no absolute number to focus on. But sometimes these individual reports point at issues that you can actually work on and improve. Sometimes, if all of the numbers are good, that could be a sign that you're actually really good. It could also be a sign that maybe there's something unexpected that you didn't watch out for. For example, what we've sometimes seen is that sites that have an interstitial on mobile, kind of like this interstitial saying you should install our app, might have really good speed numbers, because all they have to do is show this interstitial that says install the app. But actually, to get to the content, you have to click past this interstitial, and then it starts loading the actual content. So the speed score, if you just focus on the number alone, isn't really representative of what a user would see when they try to get to your content on a mobile device. So you always have to interpret the numbers that you see there and use your expert experience to figure out which of these numbers are relevant, which of these numbers are kind of misleading, and which of these numbers you want to focus on as a metric for maybe the next half year, if you want to work on speed, for example.
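Since the question was about PSI and Lighthouse numbers, here is a rough sketch of pulling a couple of those numbers programmatically from the public PageSpeed Insights v5 API. The page URL is a placeholder, and the exact response fields used here (lighthouseResult, loadingExperience) are my assumption about the v5 response layout, so verify them against the current API documentation before relying on this.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Public PageSpeed Insights v5 endpoint (assumed; keyless requests have a low quota).
API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def speed_summary(page_url: str, strategy: str = "mobile") -> dict:
    """Fetch a small speed summary: the Lighthouse lab score plus one field metric."""
    query = urlencode({"url": page_url, "strategy": strategy})
    with urlopen(API + "?" + query) as response:
        data = json.load(response)

    summary = {
        # Lab score from Lighthouse, on a 0.0 to 1.0 scale (assumed field name).
        "lighthouse_performance": data["lighthouseResult"]["categories"]["performance"]["score"],
    }
    # Field data from the Chrome User Experience Report, if available for this URL.
    fcp = data.get("loadingExperience", {}).get("metrics", {}).get("FIRST_CONTENTFUL_PAINT_MS")
    if fcp:
        summary["field_fcp_percentile_ms"] = fcp["percentile"]
    return summary

print(speed_summary("https://www.example.com/"))
```

A small wrapper like this makes it easier to track the same metric consistently over time, which fits the advice above about picking one number and sticking with it for a while.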
There is a question here: I was wondering which of the following solutions is the most effective in terms of SEO, having several pages with very specific content linked together by a dispatch page, or having a single page that gathers in different tabs all of the content of the pages mentioned above. With regards to SEO, I think there are pros and cons to both of these approaches. If you're just talking about a handful of pages, then I don't think there's really a big difference there. There might be more issues with regards to usability, and that can, of course, indirectly affect how your site performs in search as well. For example, if people can't share content that's within one of these tabs, then maybe they won't share your URL at all. And that affects our side as well: if we don't see links to those pages, because people aren't able to share the URL with the content that they want to share, then obviously we won't be able to pick that up. So that's kind of the indirect effect there. With regards to a direct effect, it's pretty similar, I'd say. I don't think you'd have a big measurable difference there. I'm sure, depending on the type of content that you have in those tabs, in those sections, there might be individual cases where going from one layout to the other does have a big visible impact in search. But for the most part, if you're talking about small pieces of content with a handful of tabs, it's probably very similar.

All right. Let's see, if a website has backlinks as credits for using photos, for copyright reasons, how are these backlinks interpreted by Google? Is there a special algorithm to classify those links as credits, or are they regular backlinks? Those are essentially regular backlinks for us. If we pick those up, then that's a link that people put on their pages. That's a link.

Let's see, my site doesn't show up when SafeSearch is on. I'm 100% sure there is no explicit content. A couple of weeks ago, I submitted it through that contact form. How long does it typically take for a review? Usually, that's something that does take quite a bit of time to be processed and reviewed, and for sites to see any changes there. So even when we manually make changes there with regards to what is submitted, it does take a couple of weeks for that to actually be live in the search results. So that's something where I'd definitely give it more than a couple of weeks.
But what you could also do is go to the Webmaster Help Forum and post your site there and get some feedback from the folks there, with regards to what the site might be getting picked up on, or whether there may be some issues that you weren't actually aware of with regards to your website. For example, what we sometimes see is that a site has a lot of hacked content that isn't directly visible when you visit it as a webmaster, but it is visible to search engines, and that could be affecting this as well.

Is it true that a featured snippet is shown in Search Console as position one, and all the other positions are then one further down? We have a great help article on the positions that are used in Search Console and the Search Analytics feature. I'd double check that article, because there are lots of subtle insights there with regards to how the position that's reported in Search Console is put together. So I'd double check that and take it from there.

Let's see. Does Google have a strategy to deal with fake reviews on local businesses? Is that something that will get more attention? Looks like Barry already posted a link there. So I'd also double check the help center there. And if you're seeing things specifically with regards to your local business, there's also a Google My Business Help Forum where a lot of these people hang out and where they can give you a little bit more advice on what you could be watching out for, what you might be able to do there, and what you might just have to live with and deal with in the long run. So I'd check that Google My Business Help Forum and see if they have any feedback for you there.

All right. What other questions are on your minds here? Everyone's so quiet today, except Barry. OK. What else is on your mind, Barry? You want to just chat? No, nothing. Just relay to the Google Search Console team that overall the community is very happy with everything they're doing, and they should keep up the good work. I know they probably see a lot of negativity out there, and being involved in software development, that could be hurtful on some level. So thank them. Cool. All right, I'll pass that on. Sounds good. Cool.

Let me just double check to see if there's anything on top of the list that was also submitted. If you want, we can bring up the whole Google News meta keywords tag and how you guys are playing it so safe. I have no idea about Google News. I always have to pass that on to other people, as you noticed. Yeah.

Let's see. What about hyphens in a URL? Hyphens are fine in URLs. So from our point of view, you can use either hyphens or underscores. I think a lot of people prefer hyphens. They're a little bit easier to read. It's something where, if the link is underlined, you can still actually see what is there. So personally, I'd just use the hyphens, like the minus signs, rather than underscores. But it's essentially up to you.

With the correct implementation of hreflang, is it ensured that a page targeting one country won't appear in the results of another country? For example, if a page from another country is more relevant for this country, will you still show that page in the other country? Oh, in the other country, yeah. So hreflang is a signal, not an absolute directive.
So if we see an hreflang link between pages and we can verify that this is a valid hreflang link, then that's a really good sign for us to say, well, if one of these pages is ranking in the search results and we know the other one is equivalent and matches the user a little bit better, then usually we'll try to swap that out and show the other URL in the search results. However, it's not a guarantee that it'll always happen. So especially across countries, you always have to assume that some users going to your pages might be from the wrong country, wrong country in terms of what you consider right or wrong, not in the sense of good country or bad country, but with regards to what you're trying to target there. And it always makes sense to have some kind of backup solution there. And that also applies if people go to your pages directly, or random other sites link to your pages. There might be someone from one country going to the wrong country version on your website. So you always have to have some kind of backup solution there.

Our general recommendation is to have a banner that you can pop up, maybe with JavaScript, that recognizes the user's location or the user's settings and says, hey, it looks like you're from, I don't know, Italy, and we actually have a page specifically for users from Italy, wouldn't you prefer to go there? And that way, the user can choose to go to your page for their country, but they can also choose to stay on that page if they want to, which is really useful if a user is traveling to a different country, for example, or which is useful for Googlebot. When Googlebot crawls a page from the US, if you have a specific version for the US and you always just redirect it there automatically, then obviously Googlebot will never see the other country versions. So by having a subtle banner, you have a backup for a lot of different situations, to handle things with regards to hreflang.

And hreflang itself, I'd say, is one of the trickier things to set up for a website that's non-trivial with regards to its size. So as soon as you have multiple languages, multiple country versions, and you have content in a CMS that's not exactly connected everywhere, then it's sometimes really tricky to set up these links between those pages. And for us, we need to confirm that these links are actually correct, so we need to crawl all of those pages, recognize the canonicals for all of them, and then we can apply the hreflang annotations in the search results. So that's something where I usually recommend taking time to figure this out with regards to your website, and to figure out what actually makes sense for your website, and not just blindly going off and saying, oh, this is like a meta tag, I can just put it on all my pages and it'll automatically work from the start. Maybe it even makes sense, if you have a larger website, to get a consultant who's specifically focused on hreflang issues, so that they can help you to figure this out a little bit better.
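As a small illustration of what a correct hreflang setup looks like, here is a sketch that generates the full set of reciprocal link elements for one piece of content. The URLs and language codes are made up; the key detail is that every version of the page carries the same complete block, including a reference to itself, so that each annotation has its confirming return link.

```python
# Hypothetical mapping of one piece of content to its language/country versions.
VERSIONS = {
    "en-us": "https://www.example.com/us/widgets",
    "en-gb": "https://www.example.com/uk/widgets",
    "it-it": "https://www.example.com/it/widgets",
    "x-default": "https://www.example.com/widgets",
}

def hreflang_tags(versions):
    """Return the <link rel="alternate" hreflang=...> block for this set of pages.

    Every page in the set gets the *same* full block, including a link to
    itself, so that each annotation can be confirmed by a return link.
    """
    return "\n".join(
        '<link rel="alternate" hreflang="%s" href="%s" />' % (lang, url)
        for lang, url in sorted(versions.items())
    )

# The same block goes into the <head> of every one of these page versions.
print(hreflang_tags(VERSIONS))
```

Generating the block from one shared mapping like this avoids the common failure mode where page A points to page B, but B never points back to A.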
All right, let's grab one last question here from the chat. Is there any SEO value in using the last modification date on a page? So on the page itself, or in the HTTP response headers, I believe we don't use that at all. We do use the last modification date in a sitemap file, but that's mostly with regards to actually crawling those pages. So those are kind of similar situations, in that both have a date, but they're different in terms of us needing to actually get the page crawled versus understanding what date is relevant for a page. So those are the two different scenarios there.

And then there's a question about an Angular site, with regards to crawling and appearing in Search Console. That's something that can be a bit tricky. So what I'd recommend doing there is posting in a special help group that we set up for JavaScript sites in search. I believe we have a link from the last blog post that we did on the AJAX crawling deprecation. So I'd double check that blog post on the AJAX crawling stuff, and I believe there's a link to the Google group that we set up for this content. So I'd post there and include your URL, so that people can double check. And hopefully, we can give you some more details there.

All right. So with that, we're pretty much at the end of our session. Thank you all for joining. Thanks for coming in, for asking all of these questions, all of the tricky questions even, and hopefully I'll see you all again in one of the future Hangouts. So we have the next one set up in German on Thursday and English on Friday, and probably we'll set up more for the next couple of weeks as well. All right, thank you, everyone. Bye. Bye.