All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these Office Hours Hangouts, where webmasters and publishers can jump in and ask questions about whatever is interesting for them and their sites with regards to web search. A bunch of questions were submitted already. But as always, if any of you who are here live want to take a shot at the first couple of questions, feel free to jump on it.

OK, so I have a question if nobody else wants to jump in. All right, go for it. All right, so congratulations on officially rolling out the mobile-first indexing process. There are some people asking if what people saw over the weekend, regarding something like a March 23rd algorithm update, was in any way related to what was being rolled out with the mobile-first index. Or, with the mobile-first index, would you probably not notice any ranking fluctuations? I don't know if you wouldn't notice anything. In general, overall, it should be fairly smooth. But I believe they only started rolling this out, like, I don't know, today, tomorrow, this week sometime, not before the blog post went live. OK, so you're confirming the rollout really happened then, because nobody's really seeing any examples of this in search results. So you're saying you launch the blog post first, and then after that, you're pushing it out. Yeah, I mean, we already had some sites rolled over to mobile-first indexing. This is just, I guess, the next bigger batch of sites that is happening. The plan is also to send the messages through Search Console. I believe they're a little behind there, so it might be that we roll some sites over this week and send them a message next week or something like that. But in general, you should get a message for that. OK, so most likely, the stuff we saw over the weekend was probably completely unrelated to this. And Google's not confirming anything that happened over the weekend, as far as you know. As far as I know. So my guess is that's unrelated to the mobile-first indexing change. Excellent, thank you.

I have one more question. But if anybody else wants to ask a question, feel free. I had a question about Google for Jobs, which I've posted on the Google Plus page. All right. I'm not sure if it's quite a specific question, I'm not sure if you can answer it or if it's for another team. Do you want to take a minute and read the question, or should I try and summarize here? Yeah, what would be the summary? So it sounds like you don't have a fixed location for the jobs? So we've got a fixed location. But what I'm finding is when I put that location in the markup and then preview it in the tool that Google provides, in some cases, in most cases, I'm not seeing that location appear in the preview. And I'm just wondering what the reason is for that, because it doesn't give me feedback as to why it doesn't appear. So from what I saw in the question, it sounds like you just have a name instead of a street address. Is that correct? So is that like a specific location that's called like that? Or is that more like a regional thing? So it's a specific location, like a town or a city, where that job is located. And we often don't have any richer data, any more specific data. We just have a string, like London or Paris.

OK, so I believe that the team prefers to have an exact address for this markup. But what I saw somewhere recently is that they're treating this more as a warning now, rather than an actual failure to implement or understand the markup. But what I would do there is definitely post in the Webmaster Help Forum about this. If you want to, you can send me the link to your question afterwards, and I'll point someone from the jobs team directly there so that they can take a look to see what exactly we could recommend there.
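(For reference, a minimal JobPosting snippet along the lines being discussed might look like the sketch below. The job title, organization, dates, and location are made-up placeholders; with only an addressLocality and no streetAddress, the testing tool may still flag the location as incomplete, as described above.)

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "JobPosting",
  "title": "Delivery Driver",
  "description": "<p>Full-time delivery driver based in London.</p>",
  "datePosted": "2018-03-26",
  "hiringOrganization": {
    "@type": "Organization",
    "name": "Example Logistics Ltd"
  },
  "jobLocation": {
    "@type": "Place",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "London",
      "addressCountry": "GB"
    }
  }
}
</script>
```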
OK, brilliant. I'll definitely do that. Thank you. All right, thanks.

All right. Any other live questions before we head off to the submitted ones? Actually, I have a question, if you'd like. I noticed in my Webmaster Tools, under HTML improvements, that we have notifications, not errors, but notifications for duplicate meta descriptions and long meta descriptions. And in all those instances, it happens to be that they're related to our landing pages for tags. How does one avoid that? So how do you mean, landing page? So it's a long post about a specific individual that's related to the tag, so if it's a celebrity, for instance. And somehow or another that's resulting in these notifications. But it's only those pages. It's not the actual posts themselves. So it's more like a category page, I guess. Correct, correct. OK, yeah. So that's perfectly fine. I think that's completely normal. We bring these notifications in Search Console, specifically the HTML improvements, out so that if you aren't aware of putting unique titles on your pages, then that's something we'd like to point out there. But if otherwise your site has unique titles and these are just a handful of pages that happen to have the same title, or especially if they're category pages and they're paginated category pages, like you're on page five of, I don't know, garden furniture, for example, then you'd still have the title garden furniture. You wouldn't necessarily need to make a different title on every page of this paginated set, for example. So from that point of view, that's perfectly fine to have like that. Thank you.

All right. Barry, go for it. Last question, then I'll stop asking things for today. So a lot of people have been reporting, in the past couple of weeks actually, about new URLs not being indexed by Google, even using Fetch as Google, which has its issues now. I honestly thought it was just people complaining, and just typical general indexing issues of Google choosing not to index that content. But I'm seeing a really large number of complaints, not just in the Google forums or on Twitter, but from SEOs that do know their stuff. So I'm wondering if it's something you guys are looking into, or if you found any issues around this. I've been looking into a bunch of those, but for the most part, it really seems like it's just working as normal. And sometimes we do index stuff very, very quickly. Sometimes we don't index stuff very quickly. I believe we're crawling a bit more than usual at the moment, so usually we pick these things up. One of the things that I've noticed in looking into a lot of these issues is that a lot of people relied on the submit-to-indexing tool in Search Console, where you manually submit URLs to be indexed. And that's something that kind of surprised me, because I thought that the normal methods of submitting content to Google should just work.

So I imagine, for example, you, with your site, you wouldn't manually submit every page that you write and say, hey, Google, I wrote this new article, pick it up quickly. But you just kind of put it in your feed, and your feed gets picked up and gets pinged automatically by the CMS. So that's kind of what we expect a normal setup to do, so that you wouldn't need to rely on any kind of manual work to get your day-to-day work done, essentially. But it's interesting that people relied on this so much. So I'm kind of wondering if there are things in the automated processes that we need to tune. So I have been looking into a bunch of these cases to see what has been happening there in the past, why they might be seeing a difference there, to figure out what we can do to improve that. Yeah, I've been surprised also how many people are complaining that they can't submit URLs through the Fetch as Google tool, even like large publishers. And I'm just surprised. So it's interesting to see that from the community. Yeah. I mean, it's good to see that the tools are being used, on the other hand. So when talking with the team, we're like, no, you can't turn this off.
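(As a hedged sketch of the automated setup John describes above: most CMSs maintain a feed or XML sitemap that search engines discover and recrawl on their own, so no manual submission is needed. The URLs and dates below are placeholders.)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/new-article</loc>
    <lastmod>2018-03-27</lastmod>
  </url>
</urlset>
```

(Referencing the sitemap from robots.txt with a line like `Sitemap: https://www.example.com/sitemap.xml` lets crawlers find it automatically.)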
All right, Don, I thought you had a question. Or you posted it. Yeah, I'm in a reception area at a friend's, so I don't really want to talk very loudly in case everybody turns around. So I've posted it. I think that's all right. Yeah, yeah. OK. So you're saying, I'm finding Googlebot crawling into areas which are blocked in URL parameters and also blocked in robots.txt. We should not be crawling anything that's blocked by robots.txt. So that's kind of strictly off limits for Googlebot, where if we understand a robots.txt file and we can parse the directives there, we will not go off and crawl that. The one edge case there is if you change a robots.txt file today, it might be that we only re-crawl the robots.txt file tomorrow and then notice the change in directives. But otherwise, we shouldn't be crawling into those areas. But I've also defined quite a lot of rules in URL parameters, and they're still being crawled. And they're really clear. So yeah. That's more of a signal for us. So that's something where we do occasionally double-check to make sure that we're not missing anything, that we're not skipping content that actually we could be indexing. So that's something where I wouldn't say it's on the same level as a robots.txt file. If it's in robots.txt, we definitely should not be crawling. OK. I'll double-check. Am I able to send you a link if I find that it is still being crawled despite the robots.txt? Yeah, definitely. So I try to look into these as much as possible, especially with robots.txt, because that's something that's really critical for us. If you put something in a robots.txt file and say, don't crawl this, then we should definitely respect that. OK. All right, then. That's great. OK. Hold on one sec, though. Sorry. It's actually, sorry, it's only certain parameters. So I'll have to think about how to do that, to be honest. Yeah, OK, because it's not that. Yeah, I'll have a think about it. OK. Perfect. All right.
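(A small sketch of the distinction drawn here, with made-up paths: Disallow rules in robots.txt are strict directives for Googlebot, while the URL Parameters settings in Search Console are only treated as a hint.)

```
# robots.txt — Disallow rules are binding for Googlebot
User-agent: *
Disallow: /private/

# Parameterized URLs can also be blocked by pattern,
# rather than relying on the URL Parameters hint alone:
Disallow: /*?sessionid=
```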
So now we have the simple question of what signals Google's algorithm looks at to calculate the amount of PageRank passed through a link. For that, I don't really have an absolute answer. So we do look for the rel=nofollow, to say we are not going to pass any signals through a link. But otherwise, it's something where we take into account a lot of things. And I think it wouldn't make sense for a normal site to try to calculate how the PageRank flows within a website, because there are just so many ways that links can go into a site, and URLs can be folded together and duplicated, and crawling kind of goes off into dead ends, and things like that. So that seems like something that's not really a good use of time.

The use of background images instead of regular image tags, does it have any negative SEO impact for web and image search? What's recommended? What are the negative sides of using background images, besides the fact that you can't add title or alt attributes? So I believe this is specific to using CSS to add kind of a background image within a tag in HTML. And from what I understand, we wouldn't pick up those images at all. So this is something where we'd say, well, this is part of your CSS, your design, and not actually an element on the page that is useful for something like web search or for image search. So my understanding is we would skip those completely. Also, we wouldn't have any information from title attributes or alt attributes on those images. So for web search, that probably doesn't matter so much, because whether we index an image or not doesn't change anything for web search. But for image search, if we don't index that image, then, of course, that image can't be shown in image search. So if you do want your images shown in image search, I'd recommend using normal image tags so that we can pick up those images normally, and so that we can also understand the context of that image a little bit better, things like the alt attributes for the image, the captions on the image, the text around the image. All of that really helps us to understand the images a little bit better. So if you want to make sure that your images are available for image search, I would not recommend using CSS background attributes to embed them into your pages.
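(A sketch of the difference, with made-up file names: the first image is design-level CSS that, per the answer above, wouldn't be picked up for Image Search; the second is a regular image tag with alt text and a caption giving context.)

```html
<!-- CSS background image: treated as design, not indexed for Image Search -->
<div class="hero" style="background-image: url('/img/garden-chair.jpg');"></div>

<!-- Regular image tag: can be indexed, with alt text and a caption for context -->
<figure>
  <img src="/img/garden-chair.jpg" alt="Teak garden chair with a green cushion">
  <figcaption>The teak garden chair, shown with the green cushion option.</figcaption>
</figure>
```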
Does Google consider too many folder directories thin content? So the example given there is wp-content/plugins/plugin-name. There was an issue with one of our sites, and apparently approximately 30 of the indexed pages are folder listings. What kind of issue will this cause? What's the best way to get these pages de-indexed? Oh, OK. So I think it's not a matter of too many folders, essentially, but kind of these directory listing pages, where it essentially just lists the URLs or the files within a directory. From our point of view, we probably ignore that for the most part. We do index these if we run across them, but I wouldn't worry that this is something where we'll say your site is seen as low-quality content, because we probably recognize fairly quickly that these pages aren't relevant for anything, and we can just skip over them when it comes to search. So I'd kind of recommend blocking those directories in general, mostly for security reasons, so that people don't stumble into, essentially, your plugin directories and figure out which plugins you're using and which files they're using, what the configuration settings might be, all of that. So block those for those reasons, but I wouldn't worry so much about it with regards to the quality of the website. It's essentially just like a simple HTML page that we're picking up there that shouldn't really cause any problems.

I accidentally created a site move to a penalized domain, one that I had parked for a year. Now my site is no longer on the first page for anything. What steps should I take to recover my Google traffic? Also, is it possible that because I didn't use a new domain, that caused a manual action or manual penalty? So I guess there are multiple approaches that you can take here. On the one hand, you can cancel the site move. That's probably the first thing that I would do. I assume you already did that. You could also redirect back from the other site, if that's something that you can set up. You can also think about what it would take to clean up the manual action for the other site. So in particular, if you're now hosting content on the other domain, then maybe any kind of manual action that you previously had for that domain would be irrelevant, and you could just do a reconsideration request. And then suddenly, things are back to normal. So those are kind of the approaches I would take there. I think it's kind of tricky with the site move tool, because it does check a lot of things before it actually accepts a site move. So you kind of have to redirect and all of that. So cleaning that up is usually a matter of redirecting back and canceling the site move. But it's probably something you want to avoid in the future. So if you have a domain that you know has a manual action on it, I would just get that cleaned up, even if you're currently not using that domain.

Let me see. Seems like some questions are showing up in the chat here as well. Is there any term like text-to-code ratio in SEO? No. We don't use anything like a text-to-code ratio when it comes to Google search. We essentially pick up the visible content on the page and we use that. Some pages have a lot more HTML. Some pages have a lot less HTML. That's more a matter of your design preferences, how you set things up on your side.

We added landing pages to the site by product type, but one main page is still ranked in search. The new pages are ranked for the query keyword plus brand. How can I fix the situation, how can I rank the pages by product type, and not the one main page? So usually this is something that just settles down over time, in the sense that if you've significantly changed the structure of your website, then we need to recrawl all of that. We need to reprocess all of those URLs and understand the context of those new pages within the rest of your website. And over time, that's something that will generally work itself out. So if you have normal linking in your site, with a clean category structure, a hierarchy on the website, then all of that kind of just cleans itself up. So no need to do anything special there.

All right. And now we have a question with a bunch of sub-questions. So let's see. On my marketplace, I'd like to add the vendor description on each product page. Will this be considered duplicate content by Googlebot? So technically, yes, that would be duplicate content. For the most part, if you have other unique content on those pages, that wouldn't be a problem, though, because then we'd still be able to understand that your page has some unique value of its own, and what the unique story is there, the unique twist that means we should show it to a set of users who are looking for that specifically. And that would generally be fine. If you do want to take it a step further and just not be the same as all of the others that use the same description, then obviously having a unique description is certainly a good idea.
On the same marketplace, I have 600 to 700 pages that were created a long time ago to improve SEO through a massive amount of descriptive content about the different types of products and so on. The content turns out to be really poor quality, and these pages drive almost no traffic. I've heard such poor-quality content could harm the entire website in terms of SEO. Is this true? Yes, we do sometimes look at the website overall to figure out how it fits in with the rest of the web and where it would be relevant to show. And if we can tell that a website is primarily low-quality content, kind of like spun content, rewritten content from other sources, then that might be something that we take into account with regards to how we show your website in search.

OK, the question goes on. I'd like to delete all of these pages and recreate new ones with higher-quality content. Could I implement 301 redirects from the old pages to the new ones? Will it harm my entire website in terms of SEO, since they don't drive a lot of traffic? So I would recommend either deleting these pages or just updating them. I think redirecting them to new pages that you create is kind of an unnecessary extra step. But if you feel that you can add significant, unique, and compelling value to these pages, then I would just update them. Or if you think that these pages actually make no sense to keep for the long run within your website, then maybe just delete them.

OK, still more to this question. I'd like to index all the product pages I mentioned as soon as possible, about 2,000 pages. Should I do both at the same time, or wait until the product pages are crawled and indexed to start deleting the content pages? You can do this in whatever order you want. There is no algorithm on our side that says you have to do everything at the same time, otherwise it doesn't count. We reprocess the website over time, and we try to take that into account as we see it happening.

And then one last question. Last week, I attended a Google event in Paris, and the speaker mentioned that neither traffic nor bounce rate is taken into account to rank the search results. How can that be possible? It's possible. I don't know how else to say it, but it's certainly possible. It's not something that we would take into account when looking at individual search results pages. Sometimes we do take things like user behavior into account when we evaluate algorithms overall, but then we're looking at millions of search results, and we try to compare how these algorithms are competing with each other, and which of these algorithms, or which of these algorithm settings, is working best. But on a small level, that doesn't really make sense, at least in our experience. What about short clicks? Short clicks. That's essentially the same thing. So that's something that we would probably take into account when it comes to evaluating algorithms overall. But on a smaller level, that's really tricky. I don't think it provides a lot of value there. Oh, it's more user satisfaction with the results overall. Yeah. OK.

Does mobile-first indexing mean Google will give full weight to hidden content, even if it's never accessible to users? Or is there a difference between not visible in the initial view and not accessible at all? How does Google handle a page with 5,000 words of hidden text, only showing a call to action to the user? It reminds me of spam and white text on a white background, kind of the old-school SEO tactics that used to work.
So from my point of view, this is something that we have covered fairly well, with regards to understanding which part of the page is relevant to the user and with regards to understanding where keyword stuffing starts. So from that point of view, I'm not necessarily that worried about the situation. I'm sure there will be some people that will try to stuff as many keywords onto a page as possible. But for the most part, we figure that out fairly well. And even on desktop, it's not the case that we would send a manual action to a website that does this kind of keyword stuffing. We have algorithms that just try to ignore it and focus on the rest of the page. So from that point of view, I'm not necessarily worried. I don't think it's a great practice to just hide content on a page and assume that search engines will treat it as being valuable and users will never be interested in it. Because if we show it to users in the search results and they go to your site and they're like, oh, there's nothing here that I want, in the future they're probably going to avoid your site, because they feel they were misled by being sent there when actually they couldn't find any information related to what they were looking for.

Let's see. I'm trying to integrate Google for Jobs. OK, this is the one I think we touched on in the beginning.

Struggling to find someone at Google to report an example of a website which successfully manipulates Google's algorithms by simply mixing paragraphs of content, as-is, from different well-established sites. We've submitted a DMCA for a sample of 10 pages, and Google removed those. And I think it goes on that they don't want to submit hundreds of DMCA requests. So essentially, this is something where you probably need to go through the web spam form if you think that this website is manipulating things with regards to the search results. Or if it's copying your content, then the DMCA process is kind of the right one for that, at least in most cases. So I'm not really sure what else we could do there. So you're welcome to send me examples of something that you see like this, and I can take a look at that with the team. But in general, we wouldn't remove a website just because there's some copied content on that website. That's something where you'd have to go through the normal process with the DMCA, where that would be done on a per-page basis. And if there are other pages that are kind of OK on this website, then those would remain in the search results. But again, you're welcome to send me some examples so that I can take a look at that with the team here, to see if maybe there's something that I'm missing.

We have a brand new website with much better content than the competition, much better backlinks and many more backlinks, totally optimized for technicals and speed and structure. But still, we're not ranking for important keywords. Is there really any sandbox, or what could be happening here? So I guess first off, I'm really kind of worried about the way you phrase this question, in the sense that you're saying this is a brand new website and it has much better links and many more links than all of your competitors. That sounds, I don't know, kind of problematic. Like, how would this happen with a new website? So that's just kind of an aside there. In general, when it comes to new websites, it is tricky for us to figure out how we should be showing these in search.
And in some cases, we don't have a lot of signals, and we have to kind of guess at where we think this website would be relevant in the search results. And over time, that will settle down. And that could be that we start off fairly conservative and it settles down at a higher level. It could also be that we start fairly high and it settles down a little bit lower in the search results over time. So this process is completely normal. It's not something that I would call a sandbox, or that we're artificially holding back new sites, or anything like that. It's really just a matter of us not really knowing in the beginning where exactly we should be showing the site in search. And it takes a bit of time for the algorithms to figure all of this out. But again, kind of going back to your initial question, I'd really kind of be worried that maybe you're doing some things here with your new website that you shouldn't be doing. So I'd really rethink what you have set up there with regards to your new website, whether that's really something that's performing extremely well in an organic way, or whether you've been doing things that in the long term will cause more trouble than actually helping your website. So that might be something worth getting some tips on from some other people who've worked in similar areas before.

OK, so two questions. I want to know if Google has any plans to show no-results pages again for some keywords, or if that was only a test. I don't know what the plans are. So I think in general, we at Google move fairly quickly sometimes. And sometimes we try new things out. Sometimes they don't work out. Sometimes we try them out in a different way, and they work out a little bit better. This is something that I think every website should be doing. They should constantly be rethinking what it is they're providing, listening to feedback, and adjusting what they're doing based on feedback, constantly testing something new and seeing if that works better. So I would be surprised if we took this test and said, oh, we'll never do anything like this again in the future ever, but rather we'll try to learn from it and see where the problems were, where things could have been improved, and how it makes sense to perhaps find ways to improve the experience for users in general in a way that works well for all participants on the web.

Second question, what types of pages do you recommend using AMP for on mobile websites? AMP has a lot more functionality than when it was initially launched. So while in the beginning it made sense to focus more on content-type pages for AMP, nowadays you can do a lot of really fancy stuff with AMP. So I wouldn't necessarily try to limit where you use AMP, but maybe think about what kind of content you have on your website, what kind of functionality you have on your website, and where you think AMP might be able to help you with that. And that's something that I think differs from website to website, and also differs a bit based on your experience, your resources, what kind of a team you have that can implement things. For example, it probably doesn't make sense to just drop all work on your website and only implement AMP. Probably you want to find some kind of a split and figure out, this is the team generally working on the website overall, moving things forward, and maybe some people that are trying new things out and seeing how they work and how that evolves over time. So with that said, I don't really have an answer of where you should be implementing AMP, but I do know there are lots of places where you can make really fast websites or really fast pages using AMP technologies, which you'll probably profit from in the long run.
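(For orientation, a skeletal AMP page looks roughly like the sketch below. The URLs and titles are placeholders, and the mandatory amp-boilerplate style/noscript block from the AMP project documentation is omitted here for brevity.)

```html
<!doctype html>
<html amp lang="en">
  <head>
    <meta charset="utf-8">
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <title>Example AMP article</title>
    <!-- Points to the regular (non-AMP) version of the page -->
    <link rel="canonical" href="https://www.example.com/article.html">
    <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
    <!-- Required amp-boilerplate style and noscript block goes here (omitted) -->
  </head>
  <body>
    <h1>Example AMP article</h1>
    <p>Article content…</p>
  </body>
</html>
```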
Are there negative consequences to adding a lot of links to other sites within one article? My site would like to offer as much transparency as possible as to how we arrived at our conclusions when fact-checking other outlets, but we also don't want to hurt our own PageRank in the process. That's perfectly fine. You can link to lots of sites from your website. That's something that definitely works for us, to also better understand the context of your website within kind of the bigger web itself. The one thing I'd try to watch out for is if there are any links that could be perceived as being paid, or as being placed there with regards to maybe some exchanges that are happening in the back. That's something where you'd probably want to use the rel=nofollow attribute on those links. But otherwise, if these are organic links, normal links within the articles on your website, I would just go for it. I think it makes perfect sense to link to other sites. The only follow-up I have on that is that my understanding is that the fact-checking community would like us to do sort of like footnotes at the bottom, showing where the sources came from. In that instance, I would sort of almost have to do that link again in the footnotes. Do you have any suggestions? You know, if I were to say that this came from, let's say, People Magazine, and then at the bottom say this quote was from People Magazine, more transparently, is that a problem? Because now I have a duplicate of the same sort of link. That's perfectly fine. That's definitely not a problem. OK, thank you.
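(A minimal sketch of the distinction, with made-up URLs: editorial citations can be plain links, while anything paid or exchanged gets the rel=nofollow attribute mentioned above.)

```html
<!-- Editorial citation: a normal, followed link -->
<a href="https://www.example.com/original-report">the original report</a>

<!-- Paid or exchanged placement: signals should not be passed -->
<a href="https://www.example.com/sponsor" rel="nofollow">our sponsor</a>
```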
OK, let's see. Long question in the chat. If we have a site that was hit by the March 7th to March 9th quality updates, that is a legitimate site that is generating world-class content and getting high-quality links to the content, but the content gets scraped, and now, after the update, the content is ranking for other, scraper websites first, before our own. I guess they're trying to figure out what is going wrong here, or what could be happening there. I guess what I would recommend doing there, first of all, is posting in the Webmaster Help Forum to try to get general advice from other people, to see if there's something specific that they could point out, something maybe that you've overlooked that you can either change on your side, or where you can react to the way other sites are acting on your content, to try to handle that a little bit better. In general, I think the update you're referring to is one of our core algorithm updates, where essentially we're just trying to improve the relevance of the search results. It's not the case that we're saying your website is bad. It's just our algorithms essentially saying maybe it's not so relevant for the queries that we used to show your website for. Obviously, as a webmaster, you probably have strong opinions about that. And that's, I think, perfectly understandable. I would, in a case like this, maybe go to the Webmaster Help Forums and show some of the queries that you felt your site would be relevant for, so that other people can take a look at that as well, to better understand how your site is interacting in the bigger picture of the web. And the folks in the Webmaster Help Forum can also escalate issues to us if they see things where they say, well, this looks really weird, this shouldn't be happening like this. They can escalate that to Googlers, and Googlers will be able to take a look at that as well and figure out if there's anything on our side that we need to change there, or if there's anything, maybe technical or from a quality point of view, that we could point out to you that you could improve on your website in general. So that's kind of what I would recommend doing there as a first step.

Can you please say more about technical SEO? What are the factors of technical SEO? Wow, I think that's like a giant open-ended question. It's like, what is SEO, essentially? So I don't know where I would start. So I guess what I would see as technical SEO is everything that's involved up to maybe indexing, when it comes to search. So crawling a website, understanding how we find URLs on a website, how we can extract the content from the pages, extract metadata from the pages, and store all of that in our index. And what happens afterwards is mostly, I guess, a matter of ranking, which takes into account lots of different signals from lots of other places. But from a technical point of view, I feel for the most part it's those first couple of steps in the pipeline. But I am doing an Ask Me Anything on Reddit. I think that's tomorrow, a bit later in the day. So maybe jump in there and ask there, because that's specifically around technical SEO. So check out what kind of questions come in. Maybe ask your own questions there, too. And maybe that'll help to give you a little bit more insight into what people generally see as technical SEO. I think another tricky aspect here is, of course, that everyone has a different view of what SEO should be and how far it goes. So that one perfect answer that tells you exactly what it is, is unlikely to exist.

I've worked on one domain for eight months. After the domain expired, I accidentally bought the wrong domain instead of my previous one. And all my work was gone. So what can I do to kind of restore my rankings there? I don't think you can do anything really magical to make that come back. You'd have to get the other domain as well and redirect to your new domain, essentially. Because otherwise, what happens is it looks to us like there was this great content here, and it disappeared, and suddenly the same copy of the content appeared somewhere else. And sometimes we can draw a connection there and say, well, maybe this belongs together and we should treat them as one thing. But it could also be that some random other person just copied your content and put that online. And then it wouldn't be the right thing to just say, well, it's probably the same thing, we'll just rank it in the same way as it was before. So I think in a case like that, either you kind of have to bite the bullet and say, well, I'll redo those last eight months, which is not an eternity, so definitely something that's reasonably possible. Or figure out a way to get that old domain name back, so that you can redirect properly to your new domain name.

I notice in the search results that sites that showed up on page one for a domain name and a few posts no longer display the posts on page one. It's like the change booted the posts off. I only see the home pages of a lot of sites. Is that normal? I am not aware of anything in that regard which has been happening.
So it sounds like maybe these are just normal search changes, or maybe these are changes specific to individual sites. If you feel the quality of the search results has gotten worse because of a change like this, I would definitely go ahead and submit feedback at the bottom of the search results page. There's a feedback link, and you can tell us what you were searching for and why you were unhappy with what happened there in the search results. If you're seeing this across the board in a lot of cases, not just for individual sites, but in a lot of situations where you think that this is really broken and Google should be doing much better in the search results, then you're welcome to send me that as a package so that I can take a look at it with the team here. For individual queries, it probably makes more sense to just use the feedback link. But if you're seeing something bigger that you can't really pack into individual feedback submissions, then you're welcome to send me that directly. The easiest way to do that is on Google Plus, which is sometimes a bit quirky with regards to how to add me to a thread privately, but it's probably the best way to get that to me.

Mobile-first indexing. OK, is a mobile version of a website with fewer internal links, so fewer top menu items, expected to have ranking problems due to less link value passed through internal linking? So if you have different content on mobile than you have on desktop, in particular if you don't have a responsive web design setup, then when we switch to mobile-first indexing for that site, we will use the mobile version for crawling, indexing, and passing the signals internally within the site. So in an extreme case, where you have no links at all on the mobile pages and you have a normal linking structure on the desktop pages, then we wouldn't be able to crawl that site anymore, because we wouldn't be able to find all of the pages anymore when we crawl with a mobile device. So I would expect, in a case like that, to see some changes in search if you have significantly different internal links on mobile than on desktop. Whereas if you have a responsive web design, even if not all links are immediately visible, that's something that you don't have to worry about at all. That essentially just works. We do take into account how a site is linking when we switch to mobile-first indexing, though. So if we see that a site has a terrible internal linking structure on mobile and a reasonable one on desktop, then probably we wouldn't be switching that site over to mobile-first indexing just yet.
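(A sketch of the distinction, with made-up class names and URLs: in a responsive setup the full navigation stays in the HTML and is only collapsed visually, so the links remain crawlable; a separate mobile template that drops those links entirely leaves nothing to find.)

```html
<!-- Responsive design: links stay in the markup, just collapsed on small screens -->
<nav class="site-nav">
  <a href="/garden-furniture/">Garden furniture</a>
  <a href="/lighting/">Lighting</a>
  <a href="/sale/">Sale</a>
</nav>
<style>
  /* Hypothetical: hide the menu behind a toggle on narrow viewports;
     the links are still present in the HTML for crawling */
  @media (max-width: 600px) {
    .site-nav a { display: none; }
  }
</style>
```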
Do you think that SEO is a long-term career? I don't know. All of these SEOs are just so young. They only briefly started working on this. At least they all look so young. So I don't know. We'll have to wait and see if this lasts more than a couple of years, I guess. What do you all think? Is SEO a long-term thing? It's dead. SEO is dead. Oh my gosh. No, that's terrible. Don't tell me that. I think there's lots of stuff that can be done with regards to SEO and understanding how the web works and how things work online. There are a lot of technical things that need to be done, even if search engines were completely artificial-intelligence powered and were able to guess all of your URLs. You'd still have to put that content on there somehow and explain to search engines why your content is the best version of that. So I totally don't see that going away. It will evolve. Of course, it has evolved quite a bit over time. When I think back to the beginnings, it's like everything was just a black box. And nowadays, there are so many technical things you can just check off, and you know this is actually something that search engines care about. So it definitely changes over time. So if you want a job that will be the same for the long run, then maybe SEO isn't the right thing for you. But if you're happy with challenges and figuring new things out all the time, then, I don't know, it's a cool area to be active in.

Does it matter where an internal link is placed? For SEO purposes, for example, are internal links placed in the sidebar as powerful as in-content links? Or will internal links carry more value if they're placed higher in the content? In general, the location of the link doesn't matter so much for us. We do try to figure out what the context of that link is, to better understand how we should connect those pages. For example, if we notice that a link is in the comments on a blog somewhere, then we'll probably assume this belongs to some blog comment, and maybe we'll treat that slightly differently than if it's something that's within the top body of your article. But for the most part, when we crawl through a website, we try to just treat it as a natural link when we come across links, and follow them normally.

Let's see, the Lighthouse audit extension for Chrome. Should I be taking note of the Progressive Web App results for my website? Or is this only for downloadable apps? So a Progressive Web App is essentially a way of making a website that fulfills certain attributes, such as being able to run offline, letting you add it to the home screen on mobile, and I think it uses a service worker. I'm not actually sure now that I think about it. But it's essentially a normal website. And you can use those audits to test your normal website as well. Some of those audit results will probably be irrelevant to you. So if you don't have any offline functionality, then you probably don't care if the audit says you don't have any offline functionality. But sometimes there is useful information that comes out of these audits, where you might see some speed aspect that's highlighted that you forgot about, or maybe some other aspect that you missed. We recently added some SEO audits to the Lighthouse setup as well. I believe they're going to show up in Chrome, or have already shown up in some Chrome versions. So for example, if you're developing a web app for the first time and you want to make sure you have the basics of SEO covered, that might be a good way to get started. The neat thing about Lighthouse is also that you can run it from the command line. So you could theoretically run this on a regular basis and just check your important pages, to double-check that things are set up properly with regards to the audits that you choose. So that might be a neat option for the more advanced people as well.
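(As a hedged sketch of the command-line use mentioned here, assuming the Node-based Lighthouse CLI and a placeholder URL; exact flag names may vary between versions.)

```
npm install -g lighthouse

# Run only the SEO audits against a page and save a JSON report
lighthouse https://www.example.com/ --only-categories=seo --output=json --output-path=./seo-report.json
```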
Let's see. A question about a cache that uses a different URL than the actual page. In what way can that affect rankings? So I'm not quite sure which cache you mean there. In many cases, if you have something like a rel=canonical on a page, and the whole page with that rel=canonical is cached, then when we crawl that cached page, we'll follow the rel=canonical and we'll generally use the canonical version instead. So if you set it up like that, I generally think that wouldn't be a problem.

What's the meaning of (not set) in Google Analytics? So I believe this refers to the data from Search Console when it's transferred to Google Analytics. And there are some queries where we don't provide the query itself, mostly for privacy reasons. For example, if a query is very rarely used, then we might be kind of cautious about providing that to Search Console directly, and from there also to Analytics. So that's kind of where that comes from. In the Search Console Help Center, for the Search Analytics feature, there's a little bit more about the kinds of queries that we filter out there.

If we take an example site that operates primarily in English but has a smaller Russian site too, so it sounds like you're just linking between some of the Russian pages and some of the English pages, do you have to do anything special to make that work? So in general, what I'd recommend doing there is trying to figure out if you can set up hreflang links between those versions. Those help us to understand that these pages are actually equivalent, but in different languages. However, for normal kind of content queries, we can usually figure out which language the user is searching in from the query that they use. And we can tell how your Russian pages or maybe your English pages are relevant to that specific query. So for the most part, that probably would just work. What would probably be trickier is when it comes to brand queries. So if someone is searching for your company name, and your company name is probably the same in Russian as it is in English, then from that query, we wouldn't know which of your pages would actually be the most relevant. Because we don't know, are they looking for the English home page or the Russian home page? And in a case like that, maybe we would show both. Maybe we would just show one of the two pages. And that might not be exactly what you want. So if you see that this is happening with some queries, probably more with brand queries than anything else, then maybe it makes sense to just set up hreflang links for those individual pages, if you can't do that across the whole website. So that's kind of what I would aim for there.
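(A minimal hreflang sketch with placeholder URLs: each page in the set carries the full group of annotations, including a self-referencing link; x-default marks the fallback for unmatched languages.)

```html
<link rel="alternate" hreflang="en" href="https://www.example.com/en/" />
<link rel="alternate" hreflang="ru" href="https://www.example.com/ru/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```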
All right. Wow, these questions keep coming and coming. Let's see how far we can go. I'm suddenly seeing huge traffic coming from AMP URLs. When I check the URL in the Google index from mobile, the article is not appearing. And when I check with a site: query, the non-AMP version shows as not indexed. And it's not being shared anywhere, just the published articles. Is there anything I'm missing here? I don't know. Getting links and traffic from AMP pages seems like something that wouldn't necessarily be totally out of the question. So it feels like that might be completely normal. What I'd recommend doing here, though, if you're really confused and not sure what is happening, is maybe posting in the Webmaster Help Forum, to see if other people can give you some more tips on what you could be looking at, or can show you whether this is or is not something that you need to worry about more.

Yesterday, it was observed that in the URL error graphs across all monitored accounts, there are two data points for one day. Is this an error, a glitch, a symptom of mobile-first indexing, or something else? So in general, with mobile-first indexing, you wouldn't see anything like this. We would just switch from one version of the content to the other across your website. So that wouldn't be reflected in any of these error reports like that. I suspect it's just some weird data glitch. I don't know what the plan is there from the Search Console side, whether we're just going to fold those together, or maybe they are folded together already. I haven't seen any more reports of that recently, but I'll double-check with the team on that.

Let's see. Do you have any advice on where I can get the most reputable information on how to correctly write about a business that works in different areas? There seems to be a lot of misunderstanding here, but this business definitely works in different cities. For example, commercial door installations. So in general, I would write your content as naturally as possible. And if there are individual locations that you're active in, then writing about those, I think, is fine. If you're doing this at a large scale and including hundreds of locations, then it probably makes more sense to use fewer individual locations and talk about regions instead. Because otherwise, you just have this mass of pages that are essentially all the same thing. So I'd try to find a reasonable middle ground with regards to what you would provide to maybe someone visiting your store in person.

All right. Wow, I think we actually made it. Oh my gosh, more questions in the chat. Is there any kind of suppression for long pages on mobile devices? I don't think so. Not that I'm aware of. Some pages are just really long, and we have to figure out how to deal with them. Sometimes, for example, a PDF can be hundreds of pages long, and we still have to figure out how to index that content somehow.

Let's see, what's the difference between the Knowledge Graph and a Google My Business listing in terms of search and search queries? Those are kind of different things. So in that regard, they're very different things. Sometimes we do show information from Google My Business in the Knowledge Graph sidebar as well. But essentially, those are different kinds of topics to look at.

All right, I think we made it to the end. Is there anything specific on your mind that I missed, that I need to cover before we head off into the next batch of Hangouts next week, or otherwise on Twitter? John, you know, on that topic of the longer pages on mobile? That was something I was just reading about in an IR book recently. They are normalized, though, aren't they? Long pages and short pages are all normalized, so that a huge page doesn't necessarily win over a short page just because of its length, because of the normalization. I mean, we need to figure out how to deal with long pages that have a lot of different content on them, and with short pages that are very focused. And I don't think you can always say that a long page that has a section covering the same content as a short page should be read as exactly the same, or different, or how that should be read. But there's some sort of normalization, some suppression, isn't there? Probably, yeah, to figure out where it makes sense to include more, where it makes sense to include less. I think that's totally normal. You've been reading a lot of these cool technical books. That's fantastic. More SEOs should be doing that. Yeah, definitely. Thanks, John.

John. Yeah, John? A couple of questions. One, something I've encountered in the past over the years is with sites that are primarily image-based, or in one particular case, it was an HTML5 site with a lot of interactive elements on the page. And the question that always came up was, do we need to supplement this with written content?
How are those dealt with? Probably, yeah. So if there's very little content on a page, then it's really hard for us to understand what the page is about. And an extreme case that I sometimes see is photography portfolio websites, where essentially each photo is on an individual page and the textual content is just the camera settings. And that's something where it's really hard for us to understand, what should we be ranking this page for? Is it relevant in web search somehow? Is it relevant just in image search? And for image search, what should we be showing it for? We don't have a lot of information there. So having some amount of textual information on a page, I think, is really critical.

OK, the second question. I'm not sure my questions are going through on the other page where we posted them. Oh, you posted them there? Correct. I don't know. I have a tool to kind of pull them out for me, but I don't see any of your questions left. So maybe they're still there somewhere. Well, I asked last time too, and they were kind of skipped, so I'm thinking there might be some issue. OK, I'll double-check. OK, thank you. Cool.

All right. Oh my gosh. Really long question. Let me see if I can summarize it. I work for a company in health and wellness that has professional writers and a lot of UGC. I guess it goes in the direction of, how would Google understand the quality of the content based on the subdirectories or different parts of a website, where you might have experts writing about things and you might have user-generated content on similar topics. I think that's, in general, a very tricky situation, because for the most part, we do try to understand things on a per-page basis. But we don't understand the full context on a per-page basis, and sometimes we will look at the bigger picture. And that often means that we look at the website overall, to try to figure out what the quality of this website is overall. So if you're mixing a lot of, let's say in the extreme case, low-quality user-generated content with really high-quality expert content on your website, then sometimes our algorithms might not understand where the barrier is between those two sections, and that we should be treating them differently. And then it might be tricky for us to show the high-quality expert content in the right place. So that's something where it might make sense to think about what you can do to make it clear which parts are user-generated content and which parts are really your expert editorial content that you're providing. That could be something like maybe a subdomain. It could be, maybe in an extreme case, even putting it on a different domain, or finding other ways to at least give us more information about the content that you're providing.

Other approaches that we've also seen from people who run forums, for example, are that they try to figure out which parts of the UGC content are actually high-quality UGC and which parts are lower-quality UGC. So that could be something where, if you have some kind of tracking mechanism on your site that lets you see which parts of the UGC are good and which parts are bad, you could say, well, I'll just put the lower-quality UGC on noindex to start off with, perhaps. And as I get information that this is actually really useful, then I take that noindex away. So this is something that I believe we do in our product forums as well, where if someone just comes in and posts a bunch of stuff in the beginning, then we'll put that on noindex. And if people interact with that and say, oh, this is actually a good question and a really insightful thread here, then we take that noindex away and have it indexed as normal content. So that might be something to look into as well. But it really depends on the website and what kind of UGC you're talking about. If these are random people from the internet giving medical advice, then it's probably not a good idea to mix that in with accredited doctors also giving medical advice. But if these are really high-quality UGC contributions, then maybe it's perfectly fine to keep those more closely together.
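(A minimal sketch of the noindex approach described above: a new, not-yet-vetted UGC thread carries a robots meta tag, which is removed once engagement signals suggest the content is worth indexing.)

```html
<!-- On a new, not-yet-vetted UGC thread -->
<meta name="robots" content="noindex">
```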
All right, we've kind of run out of time. I have the next batch of Hangouts set up, I think, for next week. So if there's anything left on your mind that you want to add, feel free to jump in there. Also, as I mentioned, the Reddit thread should be going up tomorrow. So if you have any technical SEO questions, feel free to jump in there. I'm sure that'll be interesting. And as always, in the meantime, feel free to ping us on Twitter or post in the Webmaster Help Forum. There are lots of people there who can help out with a lot of the more general questions around web search, and who escalate the quirky things that people run into as well. All right, thanks a lot again. And I hope to see you all in the future again. Bye, everyone. Thank you so much. Thank you. Bye.