All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these Office Hours Hangouts together with webmasters, SEOs, publishers, anyone who has a question around search or any topics like that. As always, a bunch of questions were submitted beforehand, so we can get through those. But if any of you want to get started with a question of your own, feel free to jump on in now.

Can you hear me?

Yes.

OK, my name is Sharon, and I work for a software company that has about 35 international websites. We have several, in the UK, India, Australia, and New Zealand, that use the same English content, so we have duplicated content. We use self-referencing canonicals, and we also use XML sitemaps with hreflang that have reciprocal links. Recently, we noticed something on some of our software subscribe pages, which use a lot of client-side JavaScript. We've been using a tool called Prerender to render some of those pages, and when Googlebot and other search engines come, we show the Prerender version. We recently noticed that Google was indexing the wrong site for some of those subscribe pages. So if you went to the UK page and checked the Google cache, it would show the India page. The way we became aware of this is that we use microdata, structured data, for the billing price, and on the SERPs, because we're getting that snippet, we were seeing the Indian rupee on the UK page. And we're like, that's not right. So we checked the cache on several of them, and we noticed that Australia would have an India page and the UK would have an Australia page. We weren't quite sure. We were hoping it was a Prerender issue, but we wanted to come here and rule out that it was something where Google was seeing duplicate content and caching the wrong page.

So generally, that can happen when we think that these pages are the same. If we think these are really identical pages, where the content is the same, then what might happen is we'll understand the connection between these URLs, but we'll just index one of these pages. So we'll index maybe the Indian version, we'll know there's also a UK version based on the hreflang, and we'll swap out that URL in the search results to send users to the right version, but we'll just have one version indexed. However, it sounds like we shouldn't be folding those pages together, because they're not really identical; like you mentioned, the prices are probably different.

So I see perhaps two things that could be happening. One is that we might just be getting it wrong, which is always possible. In that case, it would be useful to have explicit examples, where you have a query that pulls up these pages and which of these pages we're getting wrong, so that we can look at that with our team here to see what is happening. The other thing that might be happening is something with the client-side rendering. I don't know how you have it set up at the moment, but I know a lot of the pre-rendering tools use the hashbang setup with the escaped fragment to provide the pre-rendered version. On our side, we've essentially stopped recommending the hashbang format, and we're switching to crawling just the hashbang URL instead of the escaped fragment version. So if you're only serving the pre-rendered content through the escaped fragment version, then chances are we'll try to render the pages ourselves. And if we can't render those pages ourselves, then perhaps we have a simplified page that we're using as a basis for indexing, which might result in this kind of duplication, where we see, oh, this base page that we're using for indexing is the same as the other one, therefore we can fold these pages together. So I'd recommend double-checking the way you've set up pre-rendering, that you're really doing it based on the user agent, for example, and not with the escaped fragment hashbang setup. And if you're doing that properly, then I don't know; send me some examples so that I can take a look at that with the team here.
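For illustration, the user-agent-based pre-rendering setup described here might look something like the following on an Express-style Node server. This is a minimal sketch: the bot list and the renderSnapshot helper are assumptions, not a definitive implementation.

```typescript
import express from "express";

const app = express();

// Illustrative crawler user-agent patterns; a real list would be broader.
const BOT_PATTERN = /googlebot|bingbot|yandexbot|baiduspider/i;

// Stub: in practice this would call a snapshot service (such as the
// Prerender tool mentioned above) or a headless browser. Hypothetical here.
async function renderSnapshot(url: string): Promise<string> {
  return `<html><body>Pre-rendered snapshot of ${url}</body></html>`;
}

app.get("*", async (req, res, next) => {
  const ua = String(req.headers["user-agent"] ?? "");
  if (BOT_PATTERN.test(ua)) {
    // Crawlers get the pre-rendered HTML, keyed off the user agent
    // rather than the deprecated ?_escaped_fragment_= URLs.
    res.send(await renderSnapshot(req.originalUrl));
  } else {
    next(); // Regular users get the normal client-side JavaScript app.
  }
});

app.listen(3000);
```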
OK, and what's the best way to send them to you? On the forums?

The forums are an option, or you can just send them to me directly on Google+.

Thank you. Thanks.

All right, anyone else before we get started and jump into the questions?

Hi, John. I have a question related to the description snippet. I noticed that the description has recently been shortened to, like, 150 characters. I was wondering: my descriptions still have, like, 200 characters, or even 320 characters. If a user is searching for a certain query and the keyword is around the fourth line of the description, will Google still pick out the content within those 200 characters?

Sure. So I would not recommend just jumping in and changing the length of the description meta tag based on what's currently shown in the search results. This is something that's very dynamic; it can change over time, it can change per query or per device. These things are not something where it's really worthwhile to target an explicit number. So that can change. Also, the description meta tag isn't something that we use for ranking. So if your page has content on this keyword, then I wouldn't worry so much about where that keyword is in your description meta tag. Instead, just make sure that the description you provide is really short and to the point, and provides some information for users who are looking for something like that, to better understand how your pages might fit in there.

All right, and with that, I think we have the most controversial topic covered: the change in the description snippet. Let me run through some of the submitted questions. As always, if there's anything on your side that's unclear along the way, feel free to jump on in, and we can look at that as well.

On our website, the business team would like to add a redirect from our mobile site to our app if the user has already installed our app. The redirect uses deep links to send the user from pages on the site to the matching screen in the app. As we already have app indexing set up, we exclude users coming from organic search. The redirect won't redirect Googlebot either, as we want to ensure that the mobile site can still be crawled. We don't want to fall afoul of Google's sneaky redirect guidelines. Is this OK?

Essentially, this is OK. Lots of sites set it up like that. The important part is really that users, when they come to your website through the search results for the first time, can actually read your content and get to your website, and that they're not automatically pushed to an app install screen. You can definitely use an app install banner at the top or bottom of the page to let users know that you have an app. But the mobile site should be accessible just as well as the desktop site when it comes to search.
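A client-side sketch of the kind of redirect just described, skipping crawlers and visitors arriving from organic search: the appIsInstalled check, the referrer patterns, and the deep link are all assumptions, since reliable install detection is platform-specific.

```typescript
// Hypothetical check; detecting an installed app reliably is
// platform-specific (Android intents, iOS universal links, etc.).
function appIsInstalled(): boolean {
  return false;
}

// Deep-link into the app, but never for crawlers and never for a
// user who just arrived from organic search.
function maybeRedirectToApp(deepLink: string): void {
  const isCrawler = /googlebot/i.test(navigator.userAgent);
  const fromOrganicSearch = /\bgoogle\.|\bbing\./.test(document.referrer);
  if (!isCrawler && !fromOrganicSearch && appIsInstalled()) {
    window.location.href = deepLink; // e.g. "myapp://subscribe"
  }
}

maybeRedirectToApp("myapp://subscribe");
```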
Here's that question about descriptions again; we can skip that one now.

Is there any way to resubmit our website for reconsideration or a manual review from the Google webmaster SEO team? Our site was the first directory to go SSL, has HTTP/2, and is in the top 5% of Alexa speed rankings. We may have hurt our ranking previously, but we've fixed all technical issues and are looking for a way to trigger a manual SEO review.

We don't do manual SEO reviews. Essentially, if this website is doing everything right, then our algorithms will pick that up over time. We don't look at things like who was the first to do this or the first to do that. It's really a matter of the current site and the way that it's relevant in the search results to users at the moment.

Have you had a chance to look at our AngularJS site?

I don't think I looked at the AngularJS site. What I'd recommend doing with AngularJS sites in particular, if you're worried about whether or not we can index them properly, is to post in our JavaScript sites working group to get advice from other peers who've done similar things. The rest of the question says that the title, description, and canonical tags are also dynamically generated and served via the DOM. One thing to note here is that we don't pick up the canonical link element from the JavaScript-rendered version. We only pick that up from the static HTML version that we get on the initial crawl. So if you need to serve us a canonical tag, make sure it's in the static version. Or, if you can't serve it there by default, maybe consider finding a way to pre-render those pages so that we can pick it up from there directly. In many cases, that's not really necessary, but I'd take a look at your site to see what the situation is there.

Would you recommend adding hreflang to pagination? That can be a lot of pages on larger-scale sites and doesn't seem worth it, since Google will often serve page one of a paginated set. Is it worth it?

It's really up to you. hreflang, in general, is something that will swap out the URLs when we see someone searching in a specific language and we see the wrong version ranking otherwise; then we'll swap out those URLs in the search results. So if you never see those URLs showing up to users, or to the wrong type of users, then it's probably not necessary to add hreflang links there. So that might make it a little bit easier for you.

Can you tell us how important it will be for sites to start becoming PWAs, along with the use of AMP? With speed becoming more important each day, do you think we're going to start seeing a trend where many e-commerce sites load up as PWAs, with their informational content loaded via AMP?

I don't know so much about predicting the future. These things can really change over time. In general, I think the PWA setup is something that makes a lot of sense and makes it possible to have really fast sites. So it's not something I'd discard by default, but at the same time, I don't see it as something that all sites need to do. For some sites, it can make sense to have something like a PWA that provides offline functionality, that's more like an app that's installable on the home screen, those kinds of things. But that's not necessarily the case for all websites. So with that in mind, I'd definitely look into what PWAs could do for your website, and consider them if they make sense for your specific use case.
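For context on the offline functionality mentioned here: in a PWA, that typically comes from a service worker. A minimal registration sketch, where the /sw.js path is an assumed location for the service worker script:

```typescript
// Register a service worker so the browser can cache assets and serve
// the site offline, the core of the installable PWA behavior above.
if ("serviceWorker" in navigator) {
  window.addEventListener("load", () => {
    navigator.serviceWorker
      .register("/sw.js") // assumed path; the script defines the caching
      .then((reg) => console.log("Service worker registered:", reg.scope))
      .catch((err) => console.error("Service worker failed:", err));
  });
}
```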
But I wouldn't just blindly use any technology just because it's out there.

What kind of video result is this? I don't know. I'd have to look at your site to double-check what you're showing there.

I have a site where I delay loading of videos for performance reasons; they play when a user clicks on the poster image. I'd still like to use video schema to show where the videos are on the page, and I'd also like to add them to a video sitemap. Would that be acceptable, or ignored if Google has trouble seeing the video embed code?

For the most part, I think we'd be able to pick that up, especially if you have a video sitemap to let us know about it and to give us the critical information, so that we can crawl and index the video content directly. So that's what I'd look into. A lot of sites use a thumbnail image for videos; that's perfectly fine.

We used to have pretty bad XML sitemaps in Search Console: links to noindex pages, hreflang with no return links. Google eventually stopped fetching our XML sitemaps. We revamped the whole site, created 66 XML sitemaps, and submitted them in Search Console. It's been four days, and they're all still pending. Is it possible that Google is disregarding XML sitemaps for our domain, since they were pretty worthless before?

I don't know how you had your sitemaps set up before, so it's really hard to say. It's really rare that, on our side, we would say we're not going to crawl these sitemaps anymore. That would usually have to be something pretty obnoxious, where there are thousands of sitemap files with totally irrelevant URLs that are just a waste of resources. Since you mentioned 66 sitemap files, that sounds like it's probably not the case, and maybe it's just a matter of giving it more time to settle down.

Hey, so that was my question.

Hi, John. Cool.

Yes, we used to have, again, 66 sitemaps, but with, like, 10 billion total links, with duplicates. We used to have hreflang tags in the sitemaps too. So with everything included, there were 10 billion URLs, and 90% of them were actually pages with noindex. Now, when we create our sitemaps, we take the noindex tags into account. So now we have much lighter sitemaps, and everything in them is indexable. But for the last seven or eight months, Google has actually stopped crawling our sitemaps altogether. Like you said, it was a waste of your resources, so it made sense for you to not crawl them. But now this new version is better. So how can we tell Google that this version is good?

Yeah, OK. You can send me the URL, and I can double-check with the team to see if there's anything blocking on our side.

All right, I'll message you on Google+.

Yeah, that sounds like a good idea.

OK, thank you. Cool.
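Tying this back to the hreflang sitemaps mentioned at the start of the Hangout: a minimal sketch of what one such entry might look like, with illustrative URLs. Every URL listed should be indexable (no noindex pages), and the hreflang annotations must be reciprocal across the alternate versions.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://example.com/uk/subscribe</loc>
    <xhtml:link rel="alternate" hreflang="en-gb"
                href="https://example.com/uk/subscribe"/>
    <xhtml:link rel="alternate" hreflang="en-in"
                href="https://example.com/in/subscribe"/>
    <xhtml:link rel="alternate" hreflang="en-au"
                href="https://example.com/au/subscribe"/>
  </url>
  <!-- The /in/ and /au/ entries must list the same set of alternates
       back (reciprocal links), or the annotations are ignored. -->
</urlset>
```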
We need to kill thousands of pages in redirect chains from our subdomains. But oh boy, long question. We set them to 410 and submitted them in sitemaps. The problem is that Google is not seeing them as not found, just as "Submitted URL not selected as canonical."

Usually, this is something that catches up over time. If you're already seeing "Submitted URL not selected as canonical," that means we're not indexing that page, so there's essentially nothing that we'd need to take out of the index in that situation. So if you're already seeing them as not selected as canonical, then it's just a matter of us not having crawled them with that not-found error code yet. But essentially, we've already processed that, and we don't need to know anything more about it. So that sounds like it should be working as expected, and not something that you'd need to force additionally.

If I submit a new XML sitemap to Google and see that it has too many errors, and I remove it from Search Console, will Search Console still throw errors for that specific sitemap, or won't it show as long as there are other errors with the sitemap?

What I'd recommend doing there, if you submitted a sitemap file that has a lot of errors in it, is to either fix the sitemap file so that it's actually a valid sitemap file, or at least remove the sitemap file from the server. As long as the sitemap file is still on your server and is just not listed in Search Console, we could still crawl it and try to process it. So if you really want it taken out, I'd recommend removing it from your server directly, or fixing the sitemap file.

In Search Console crawl errors, we sometimes see pages listed as 500 or 410 while the page opens fine. Could Search Console have an option like a cache from the time Googlebot hit the page, so that we can share it with the team and show them that whenever Googlebot hits this page, it gets this response code?

It would only be for errors or warnings. What happens with general server errors, 4xx or 5xx errors, is that we don't process the rest of the content of the page. So we don't actually keep track of what is shown on that page to search engines, or to us, at least. We only keep track when we can pick up the content normally, when it returns a 200. So in a case like this, we wouldn't have any cache that we could show you in Search Console.

Let's see what we have here. If only the contact address is changed on the website, though there are thousands of pages that are duplicate versions, will it be considered different content? With hreflang, will we consider this, or will we still see it as duplicate content?

I'm not quite sure what you mean. In general, if there is different content on a page, then that's not duplicate content. So if you have different addresses on a page, especially if you're looking at something like a contact page, as you mentioned, then that would not be considered duplicate content; that would just be normal content. So that usually wouldn't be something to worry about. If we did see something like this as duplicate content (for example, if you have a large piece of general text on a page and a small address at the bottom, and that general text is the same across multiple pages with only the address changing at the bottom), then what would usually happen is we'd still index those pages, but we'd try to pick one of them to show for queries that are more general, and filter out the other versions of the same page. It's not something where we would demote a website for having this kind of duplicate content, because this is a really common scenario.

We want to kill a few pages, so I'm asking my developers to add both noindex and nofollow, and then do a redirect to the closest page. Is this the right approach, or should I do something else to remove the pages from Google's search index?

There are two things here. On the one hand, you can't both add a meta tag to a page and redirect it; you have to pick one or the other. On the other hand, if you remove a page, I would just return the normal result code for a removed page, which could be a 410 or 404 error code, and show a user-friendly error page, at least, so that users understand that this content no longer exists. But you definitely don't need any fancy combination of meta tag and redirect.
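A minimal sketch of that recommendation, returning a 410 together with a user-friendly error page, assuming an Express-style Node server and an illustrative list of removed paths:

```typescript
import express from "express";

const app = express();

// Illustrative set of permanently removed paths.
const gonePaths = new Set(["/old-product", "/discontinued-page"]);

app.use((req, res, next) => {
  if (gonePaths.has(req.path)) {
    // Return 410 Gone (a 404 works too) plus a human-friendly page,
    // rather than combining a noindex meta tag with a redirect.
    res.status(410).send(
      "<h1>This page has been removed</h1>" +
      '<p>Try our <a href="/">homepage</a> instead.</p>'
    );
  } else {
    next();
  }
});

app.listen(3000);
```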
Already asked; they posted their question on Webmaster Central. So I think this is the question about GDPR. One of the things in this question was that they show an interstitial instead of the actual content for users who don't have a cookie. From our point of view, if you show an interstitial instead of content, then Googlebot won't be able to crawl past that. Googlebot won't click on any links on this page and try to follow through. Instead, it will see the text that you're providing on this interstitial page, and it will try to index that text, because that's what you're providing to Googlebot. So that's probably not what you're trying to achieve. From our point of view, there is no real way to both block the content completely and show all users an interstitial. What you could do is perhaps use JavaScript to show an interstitial overlay, so that we can still find the full content. That might be an option there. But you can't block the whole page, expect Googlebot to click through to actually get to the content, and expect that Googlebot will index the final content instead of the interstitial content. Also, Googlebot won't keep a cookie. So even if we did click through once, that wouldn't mean we'd understand that we can serve this cookie and get to the rest of the content again over and over.

Let's see. What factors affect visibility in Google's local news? I don't know. Google News search is separate from web search, so I don't really have any factors that you can look at specifically for Google News.

Let's see, lots of questions here. m-dot sites and mobile-first indexing plus hreflang: how do I fix that? If I point my m-dot canonicals to the desktop pages, as in the documentation, don't I break the m-dot hreflang link validation?

Yes. With an m-dot page, the rel canonical, and the hreflang link, the hreflang link needs to be on the m-dot page itself as well, and it needs to be between the m-dot versions. So your desktop page points its hreflang at the other desktop pages in that set, and the mobile page points at the other mobile pages in that hreflang set. That's the way you set that up. You don't have to do anything special with regard to rel canonical or rel alternate; you'd leave those links as they are. The canonical would point to the desktop page, the alternate would point to the mobile page. But within the set of mobile pages and within the set of desktop pages, that's where you set up the hreflang. We also have that documented somewhere on our developer site, so I'd double-check there if you need more information on how that might look.
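Spelled out in markup, the setup described here might look like this, with illustrative URLs: hreflang stays within each set (desktop to desktop, m-dot to m-dot), while canonical and alternate connect the desktop/mobile pairs.

```html
<!-- On https://example.com/page (desktop, UK version): -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page">
<link rel="alternate" hreflang="en-gb" href="https://example.com/page">
<link rel="alternate" hreflang="en-us" href="https://example.com/us/page">

<!-- On https://m.example.com/page (mobile, UK version): -->
<link rel="canonical" href="https://example.com/page">
<link rel="alternate" hreflang="en-gb" href="https://m.example.com/page">
<link rel="alternate" hreflang="en-us" href="https://m.example.com/us/page">
```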
We have both the HTTP and HTTPS versions of a page, where HTTPS was recently launched. Is the best approach to redirect all HTTP to HTTPS, or can they coexist? What's your suggestion?

If you keep them separate, so that they don't redirect, we'll index them as separate pages, which means we might pick up some as HTTP and some as HTTPS, maybe some as both versions, depending on how you serve them. And that's probably not ideal. So I'd really recommend double-checking our Help Center guidelines on moving to HTTPS, which include setting up redirects from all of your HTTP pages to the HTTPS pages. That's what I'd recommend for a change like this.

If we mark something as fixed in Search Console, is that also applicable to the new interface? I'm not seeing any change in the new Search Console.

No, "mark as fixed" is only specific to the old interface. I believe it's only in crawl errors, and it's really only a change to the user interface in crawl errors; it doesn't change anything on the back end on our side. It doesn't re-crawl those pages or try them again. If you need pages re-crawled, I'd look at the new Search Console's index coverage report, where, when you spot issues, there is a way to validate them. That means we'll go off and check a small sample of those pages fairly quickly to see if the problem is fixed, and if it looks like it's fixed, we'll crawl the rest of the affected pages as quickly as possible. So that's what I'd recommend doing there.

Does Googlebot recognize the data-alt attribute on image tags as well as the alt text?

No, we don't. Often, data attributes are used for lazy loading, or for passing information to other scripts, on images or other elements. But we don't pick that up for indexing, neither for Image Search nor for Web Search.

I'm looking at a site that seems to be generating infinite URLs through URL parameters and scrolling on a blog page. How does Googlebot see this, and what effect does it have on my client's site? How would you recommend resolving it?

Yeah, if it's really generating an infinite number of URLs, that means we'd try to crawl a lot of those URLs, which would probably keep us in a loop and essentially keep us busy with all of these infinite URLs rather than actually focusing on the useful content on your site. So I'd recommend double-checking our guidelines for things like infinite scroll, to make sure that you're really only providing the URLs that are necessary to have the site crawled, and that you're providing them in a reasonable way so that we can pick them up. If you're using URL parameters, and those parameters include things like session IDs or individual bookmarks within a bigger piece of content, then you might want to look into the URL parameter handling tool, which lets you tell us that individual parameters are irrelevant for your website. But I'd first double-check how you have this implemented and what's actually happening on our side when it comes to crawling.

Hi, John, I've got a little question related to URL parameters. Say I have URL parameters on the site, and the canonical URL changes when the URL parameters change along with it. But then I go into Search Console, and in the URL parameter settings I set a specific URL parameter to not be indexed. Which will the search engine pick up: the canonical URL, or the parameter settings in Search Console?

We see the URL parameter settings more as a signal for us, so we don't follow them blindly, like we would with the robots.txt file, but they are a pretty strong signal. So if you tell us that one URL has a rel canonical pointing to a particular URL, and your URL parameter settings say, don't index any of these parameters, then we probably won't index them. It's possible there might be some intermediate period until we've processed that, but for the most part, we probably wouldn't index those parameters. So I'd say the URL parameter setting is a signal, but it's probably a stronger signal than the canonical would be.

OK, OK, great.
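As a concrete example of the two signals just discussed, with illustrative URLs: the page-level canonical points away from the parameterized version, and the same parameter could additionally be flagged as irrelevant in Search Console's URL parameters tool.

```html
<!-- Served on https://example.com/shirts?sessionid=abc123&sort=price -->
<link rel="canonical" href="https://example.com/shirts">
```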
All right: what would your recommendation be for FAQ pages on fairly low-authority sites, having lots of thin-content pages coming off the main FAQ page versus one large comprehensive page? The latter would be better in terms of having one authoritative resource, but the user will need to scroll a lot to find things. What are your thoughts?

I think this is something I would totally test. Since you mentioned that the user might have to do a lot of work to get to the content, I would run a normal A/B test with your users to see which version they actually prefer. It might be that they prefer a really long FAQ; it might be that they prefer having it split into chunks, or maybe into individual pages. I assume that if you put just a short question and answer on one individual page, that's probably not that interesting for users. So if you just have a really short question and answer, I'd at least make sure that you expand the answer section to provide some more context around what this is, where the question comes from, and what other, similar types of questions might be out there. But I would definitely test this with users to see what they prefer.

My question for today is in connection with the new GDPR, which comes into force this month, and the Google Analytics settings. Is there any chance you can clarify that option in Google Analytics, what it means, and what will change?

I don't have any insight into Google Analytics, so I can't really tell you what will change there. I'd contact the Google Analytics folks in their help forum (I believe they have a help forum specifically for Analytics) and get their input there.

Hey, John. Hi. We'll still receive Search Analytics data in Search Console for EU sites post-GDPR, correct?

Yes. That's not tied to individual users. That's aggregate data in Search Console, so it wouldn't be affected by any of these changes.

In the index coverage report, I see Google has crawled the add-to-cart, remove-from-favorites, add-to-wishlist, and other such links on an e-commerce site. Search Console automatically excluded these links and listed them as pages with redirects. Should I put nofollow on these links and spare Google from going through them? Any other advice for dealing with these internal links? For example, my site has 11,000 indexed pages and 30,000 excluded as pages with redirects.

It sounds like that's already fine. If you're already redirecting from these pages, maybe back to the general version, then it sounds like we're picking that up properly. What you might want to do is look into what these URLs look like and potentially use the URL parameter handling tool to let us know that these pages are not worth indexing or crawling. I wouldn't block them with robots.txt, and, generally speaking, doing PageRank sculpting by putting nofollow on these internal links doesn't have that much of an effect anyway. So in the worst case, I would just leave it like this. It's definitely not going to cause a problem.
How do you decide what to pull in for featured snippets? Does Google only pull in info from sites with a high E-A-T, or is it primarily judged on relevance to the query?

I'm not sure what that E-A-T is. But we look at a number of factors when it comes to featured snippets, and a big part of that is really relevance: understanding what we think makes sense to show to users for individual queries. I realize sometimes we get them wrong; sometimes we have weird featured snippets. And it's really helpful for us to get examples of these, so that we can pass them on to the team. But generally speaking, it's not the case that just because you have an important website, we'll automatically include everything you write as a featured snippet.

Some pages of our site are not getting crawled. We tried copying a portion of the text and searching on Google, but it doesn't show our page. However, when I check the cache result, Google shows the page fine, with the text present on the page. Is there an issue with crawling our pages?

I double-checked the links you provided just before the Hangout, and it looks like we're indexing this page normally. The text snippet that you linked to shows that page actually indexed normally. One thing that might be happening here is that we have a number of different data centers worldwide, and sometimes, for mostly technical reasons, we don't have all pages in all indexes across all of these data centers. So if you check once, you might be checking one data center; if I check here, I might be checking a slightly different data center, and that could theoretically result in a difference in the results we see. For the most part, this settles down fairly quickly. So I suspect that if you check now, or tomorrow or so, that page will probably show as indexed just fine on your side as well.

We often link to the same handful of sites, because in entertainment reporting there are thousands of outlets that are unreliable, while only three or four are actually trustworthy and have real access to celebrities. Can this repeated reliance on the same three or four legitimate sites falsely be construed as part of a link farm?

No, that's perfectly fine. If you have a bunch of authoritative sites that you want to refer users to, that's perfectly fine; it's not something we would see as negative. A lot of sites link, for example, to Wikipedia or to other authoritative resources, and might not link to lower-quality or random blogs on the same topic. And that's perfectly fine. That's the way the web works. We don't use these links in a way where we'd say, oh, this site is linking to an authoritative resource, therefore it must be a good site as well. We evaluate the site on its own, without just looking at the links it has there. One really common spammy technique from, I don't know, 10 or 15 years ago was to link to sites like Wikipedia or Google or CNN, and then to hope that search engines would look at the page and say, oh, there's a bunch of spam here, but they're linking to CNN, so it must be a legitimate site. And that's something that I think all search engines have realized in the meantime: just because a page links to a good site doesn't make its content good.

There's actually a second part to this question.

OK, go for it.

The problem in my industry is that the more outrageous or untrue a story is, the more likely that story is going to accrue links very quickly, because it's easier for sites that don't fact-check, or unreliable sites, to just link. They feel they're also legally indemnified.
And of course, it's more interesting to read an outrageous story than the truth. The problem for us, historically, for the almost decade that we've been in business, is that after we correct a story, it's not that sexy or interesting to link to the correction. Is that a problem for us? Because sometimes we find it a little hard to rise into Top Stories and combat the fake, made-up story.

For the most part, that shouldn't be a problem. We look at more than just the number of links to a page; we try to understand the quality and the quantity of the links when we evaluate the links going to a page. So for the most part, I think we would probably get that right. I think it's always a tricky situation to be in, though, just because of the nature of these, like you said, outrageous claims, where people are like, oh, look at this thing, you won't believe it, and then link to it. Whereas if it's like, oh, this is actually not true, nothing crazy is happening here, then nobody really cares. But for the most part, I think we would get that right. And we don't take things like social signals into account; if someone is sharing these things on Facebook or Twitter, those would be nofollow links anyway, and we wouldn't take them into account. So I think, for the most part, at least from the linking side, we should be picking that up properly. I think there's always something to be done with regard to better understanding what a page is about and how it's relevant, which can be tricky as well, especially when you have a topic where essentially there is nothing amazing to say other than, well, I don't know, they're not aliens living on the moon; they're just normal people. It's really hard for us sometimes to figure out how we should show this in the search results when someone is searching for something like, is this person an alien, and the answer is, well, no, they live here in New Jersey. That's always tricky. But I do know there are a number of people on our side looking into this problem, to try to figure out how to better catch this situation and how to surface the legitimate or correct sites a little bit better.

Well, from our side, within the body of the text, we link to third-party published documentation that shows that the claim is not true. And then beneath each story, we list loads and loads of sources that prove it. If it says someone is pregnant, obviously their doctor is not going to say it publicly, but we have found that there are publicly available ways, in addition to a spokesperson, to prove that the claim is untrue. And we list that underneath each story.

That sounds pretty good. Yeah, I know you've been struggling with this, so we do talk to the team to see what we can do to make that a little bit better. I think it's a tricky niche that you're in, especially because of all of these crazy stories that are happening all the time. But we'll try to see what we can do to make that a little bit easier, or more reasonable, in the search results, I guess.

I truly appreciate that. Thank you. Thanks.

Hey, John.

Yeah.

So I have two questions for you. The first one is about minified JavaScript and CSS files. I have a project now where, when I minify them, the site is totally broken. How important is that for Google? That's my first question.

It depends on what you're doing in the JavaScript. If the JavaScript is important for the page to load, if the content isn't on the page without processing the JavaScript, then, of course, we wouldn't have the content. But if it's just some random JavaScript that you don't really need to view the page, then maybe it doesn't really matter.
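To illustrate the distinction drawn here (with illustrative markup): the first block is available even if scripts never run, while the second exists only after the JavaScript executes, so if that script is broken by minification, the content is simply gone.

```html
<!-- Content present in the static HTML: available without JavaScript. -->
<div id="product">
  <h1>Blue Widget</h1>
  <p>In stock, ships tomorrow.</p>
</div>

<!-- Content injected by script: only exists after the JavaScript runs. -->
<div id="app"></div>
<script>
  document.getElementById("app").innerHTML =
    "<h1>Blue Widget</h1><p>In stock, ships tomorrow.</p>";
</script>
```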
And my second one is about the external links on our pages. Do they actually have an impact, or are the internal links more important?

We use both when it comes to search. We try to understand how the site fits in within the context of the rest of the web, and within the context of your website.

So is that a kind of relevancy signal for Google?

To some extent, yeah. I mean, it really depends on the site and the queries and things like that. But it is something that we use in some of our algorithms: to understand where we should start crawling, to discover new pages, to understand which pages need to be crawled more frequently, those kinds of things.

So not always for relevancy, but we can keep in mind that they could be used for better relevancy.

Yeah, I mean, we try to understand which pages are relevant within the context of the rest of the web: how they're being linked to, how they're understood by other sites.

Thank you, thank you. Sure.

Let's see, here's a question. Again with Angular: we have a bunch of product pages that are built with Angular, with a hash in the URL, the number sign, that are never indexed by Google. For example, site.com/category#product1. Will removing the hash from these Angular pages make them indexable, or would you recommend using pre-rendering or some alternative?

Yes, you need to remove the hash from the URLs. We essentially drop everything after the hash sign when we see a URL like that referenced within your website, and we'll try to crawl the page without that hash part. So if you want these pages to be indexed individually, you need to make sure that you're using natural URLs to let us know about the pages, so that we can crawl them and actually try to index them directly. It's not always necessary to pre-render those pages, and using a pre-render service doesn't necessarily get rid of the hash in the URL. So as a first step, I'd really recommend removing the hash. We did a session on JavaScript sites and search at Google I/O last week; I'd recommend taking a look at that, as it also covers this briefly.
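In Angular's router, moving from hash URLs to natural URLs is a configuration choice. A minimal sketch, where the route and ProductComponent are hypothetical:

```typescript
import { NgModule } from "@angular/core";
import { RouterModule, Routes } from "@angular/router";
import { ProductComponent } from "./product.component"; // hypothetical

const routes: Routes = [
  // Crawlable path-style URL: site.com/category/product-1
  { path: "category/:productId", component: ProductComponent },
];

@NgModule({
  // useHash: false (the default) uses the History API for natural URLs
  // instead of hash-based URLs, which Googlebot truncates at the "#".
  // The server must then be configured to serve the app for these paths.
  imports: [RouterModule.forRoot(routes, { useHash: false })],
  exports: [RouterModule],
})
export class AppRoutingModule {}
```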
My question is about embedding YouTube videos on our pages. Is this going to be an issue with the data protection thing, even in advanced mode?

I don't know about the advanced mode. And as far as I know, I wouldn't expect this to have any effect with regard to data protection, because it's a YouTube video; it's not something crazy. But I don't actually know the details there, so I'd recommend double-checking in the YouTube forum. I believe they have a normal help forum where a lot of the top contributors are active as well, and they probably have a bit more information on that particular topic. It's more a question of what YouTube does than of what Search does. From the Search point of view, we would look at the page, just see that there's a video embedded, and show that in the search results. I'm not sure if the fact that it comes from YouTube, or that it's served via YouTube, plays a role in any of the data protection guidelines on your side. So I'd double-check with the YouTube team instead.

We're a university, and we own several different domain names. We have our main website, but we also have websites for athletics, a university store, et cetera. They're all on separate domains, as opposed to subdomains of a main domain. Can Google tell that these domains are related? Could the athletics or store domains affect the SEO of our main domain?

For the most part, we can probably figure out that these are related. But I don't think we do anything special in Search in that regard. So if someone is searching for your university's athletics department, we'll try to show that site, and that's usually something we'd be able to pick up on directly as well. So regardless of subdomain or separate domain, we should be able to understand that. Usually, what happens in practice is that you link between these different sites anyway: you link to maybe the main university site, and from there you link to the university store, or the athletics department, or other departments. That way, we understand that these are all related, and that helps us put things in context a bit. Usually, that's enough for us to understand how they belong together.

How does Google see "near me" searches? Does it simply bring results based on location, or is there something we should be doing to ensure that our sites appear when a user completes such a search, whether by voice or text?

As far as I know, we mostly look at this with regard to location. So doing things like adding, I don't know, "pizzeria near me" as the title of pages on your website probably doesn't make that much sense, because we try to figure out what is actually near the user. For us to be able to figure that out properly, there are two things you can do. One is, obviously, to put your address on your website and to mark it up appropriately with structured data, so that we understand where your location is. The other is to set up a Google My Business local listing, so that we know where your business is located and can connect that to your website as well. Those are the two things I'd look at there.

Could you give guidelines on how to make lazy-loaded images available for crawling, and also responsive srcset images?

We have some of that information in the Google I/O talk on JavaScript sites that I mentioned, so I'd double-check that. In particular, for lazy loading, we recommend using a noscript tag after your lazy-loaded image element and letting us know about the image that way, or using schema.org structured data on the page, if it's something that can be marked up with an image directly within the structured data, so that we can pick it up. Within the structured data, you can also specify a set of images, which we'll be able to look at as a set as well; that would cover the responsive srcset side too. We need to document this on our side to make it all a little bit easier to find and digest. We'll be doing that, I guess, over the coming weeks or so.
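A minimal sketch of the noscript fallback recommended here, with an illustrative image URL and a data-src attribute as the assumed lazy-loading hook:

```html
<!-- Lazy-loaded image: a script copies data-src into src on scroll. -->
<img class="lazy" data-src="https://example.com/img/tower.jpg"
     alt="Eiffel Tower at dusk">

<!-- Fallback so the image is still discoverable when the lazy-loading
     script isn't executed, as described above. -->
<noscript>
  <img src="https://example.com/img/tower.jpg" alt="Eiffel Tower at dusk">
</noscript>
```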
Can I follow up on that? I'm definitely not the technical guy, but I did watch some of those videos from Google I/O, and a lot was said about JavaScript. Because I'm not the technical guy, I asked my developers, do we need to migrate our stuff to JavaScript? And I was sort of told, no, you don't really have to. Is that accurate, or should I press them a little more?

No, that's correct. The JavaScript part is particularly with regard to sites that are already built purely on JavaScript, and guidelines for how to make those available for Search in an optimal way. But if your site isn't built on JavaScript like that, then that's perfectly fine. In particular, some of the newer frameworks and development environments focus a lot on JavaScript to improve speed and to improve the development cycle, so that's what we were targeting there.

All right, looks like we just have a couple of minutes left, so maybe I'll just open it up to any questions from your side. What else should we cover?

You've answered all my questions. Oh, there's one in chat: I've seen a lot of answers from "People also ask" that are localized to the US rather than to my location in the UK. The feedback doesn't seem to be acted upon. Is there a better way to report this?

One thing you can do is send me examples, so that I can take a look at what's going wrong on our side and how we can handle it better. I suspect that with newer search elements like the "People also ask" feature, there may be things we can improve, especially with regard to localization: when both pages are in English, but one is for the UK and one is for the US, which one do we actually show, those kinds of things. But if you can send me specific queries and screenshots, that goes a long way toward reaching the team and helping them understand the problem.

Cool. Otherwise, I'll grab another one from the list here. One of our affiliate websites is over a year old now and has started to grow slowly. We were getting around 100 to 150 organic visits from Google before traffic started to drop around the 13th or 14th of December, and it has never fully recovered. We've tried doing everything on the site, from an on-page audit to content updates and tweaking, to link removal, but nothing has helped to get the rankings back. What should we be doing to get them back?

I don't think there's a magic solution to getting your ranking back. Sometimes it's not the site that changed; things change with regard to our algorithms. Sometimes things on our side change in the sense that we re-evaluate your website, and we think maybe it's not as relevant as we thought initially. Especially since you call out that it's an affiliate site right at the beginning, I'm a little worried that this is primarily an affiliate site. Maybe there are lots of other affiliate sites in that area, or the original sources that you're an affiliate of are also available in that area. And from our point of view, it might not make sense to show all of these different affiliates if they're not providing significant, unique, and compelling value. So that's one thing where I'd take a step back and think about what you could do to really provide more than an affiliate site: to provide something that's really fantastic and awesome, that answers people's questions and gives them a lot of value, something that they would refer to directly, rather than the target site of whatever you're selling as an affiliate. That would be my advice there. I realize it would be nice to have a meta tag that says, like, give me back the rankings from last December, but that's not going to happen. There is no simple technical change you can make on any site to say, take me back to the search results from some time in the past when I was doing better. So that's something where you just have to bite the bullet and try to work on the site.
All right, looks like we've made it to the end. I see there are still a ton of questions that were submitted, so I'll try to double-check to make sure I'm not missing anything critical, and maybe add a comment to the Google+ post if I need to. Otherwise, if there's something on your mind that you'd like more information on, I'd recommend going to the Webmaster Help Forum; there are lots of really smart and friendly people there who can help you get started on a problem. Alternatively, we're doing more of these Hangouts, so feel free to join one of the future Hangouts, or drop your questions there. Thanks, everyone, for joining, and I wish you all a great day.

Bye, everyone.

Thank you, John. Bye-bye.

Bye.