All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a webmaster trends analyst here at Google in Switzerland, and part of what we do are these office-hours hangouts with webmasters, publishers, and SEOs of all kinds. As always, a bunch of questions were submitted already. But if any of you want to get started with the first question, here's your chance to jump in now.

Regarding the Knowledge Graph, basically the new claiming part: if you don't have a Wikipedia page, or you're not notable like Larry Page, then you can't go ahead and claim your listing, or who you are. If you don't have schema markup, if you just go and type your name and try to claim it, and you're not notable, you can't claim your name. Is that correct?

I think that might be the case. I haven't been paying attention to the details there, so I don't know. It sounds like that might be the case. If you were a good SEO, you could make sure you show up there.

Yeah, but I mean, Freebase is gone. Freebase is closed, so you can't do that any longer. It's not like they're advertising it and saying, hey, you can do that. But I'm just joking.

Yeah, I don't know. Maybe that'll change over time. I think it would be useful to make it possible for people to edit these things if it's really about them. But I don't know what the requirements will be.

At the same time, I understand, though. You're not looking forward to 3 billion government IDs coming in with a selfie.

Probably, yeah. I don't know how that will be set up in the future.

But it makes sense. I understand now. Cool.

Yeah. I have a question about what you announced this morning, the indexing API.

OK.

So, very cool. Job posting URLs are a really nice way to start with that, obviously, like you guys described. Will you be releasing it for more areas, not just job posting URLs?

I don't know. I don't know how far we can take that. I think, especially for jobs, it's something where we've noticed this is a real problem: not just getting stuff indexed, but also getting stuff out fairly quickly. But I don't know if that's something that will be available for other types of content as well.

If you look at the website, it was worded in a way that sounds like it will be released for other areas; currently it's only for job postings.

I have no insight there, so I just don't want to say anything. We don't like to preannounce things. It's something where we need to see how it works out for us first.

And this is different from what was announced two years ago at the Google I/O thing, the real-time indexing API, which never really happened?

It's totally different. I think it's a totally new API.

OK. Thank you.
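For reference, a minimal sketch of what a call to the Indexing API looks like, assuming you have already obtained an OAuth 2.0 access token for the indexing scope; token handling and error checking are left out, and the job URL is made up:

```javascript
// Minimal sketch: notify the Indexing API that a job posting URL was
// added/updated or removed. ACCESS_TOKEN is a placeholder; in practice it
// comes from a service account with the
// https://www.googleapis.com/auth/indexing scope.
const ACCESS_TOKEN = "<oauth2-access-token>";

async function notifyGoogle(url, type) {
  // type: "URL_UPDATED" for new or changed postings,
  //       "URL_DELETED" for postings that should drop out quickly.
  const res = await fetch(
    "https://indexing.googleapis.com/v3/urlNotifications:publish",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${ACCESS_TOKEN}`,
      },
      body: JSON.stringify({ url, type }),
    }
  );
  return res.json();
}

notifyGoogle("https://example.com/jobs/12345", "URL_UPDATED");
```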
Hello. I'm Jordy from Wikiloc, and I have a question.

Sure.

It's about the correct use, for SEO, of Google translations on a web page. I wrote it in the Q&A yesterday; I'll read it. Our website supports 22 languages and has a subdomain for each language. The navigation elements are all translated into the corresponding languages, but most of the text content is written by users and is in one language only, the native language of the user who wrote it. The problem we have is that, for instance, a German visitor might land on a page where the navigation is in German, but the main content, the user content, might be in Spanish because it was written by a Spanish user. To improve the content's accessibility, we would like to show each visitor the content in their own native language. We plan to use the Google Cloud Translation API to translate these texts to help users understand something. If we decide to show the translated version of the text automatically on its own subdomain, for example German text on the DE subdomain, will this penalize our SEO, or is it better to always show the content in the original language and let the visitor click a button to request a translated version? The translated text will have the corresponding meta tags to show crawlers that it was translated automatically. And we also have link rel=alternate hreflang annotations in the header to show the different versions, including the one subdomain that has the original content. But we really don't know if it's correct to show the translated version automatically when the user enters, or if it's better to give the user the option. I'd like to know your thoughts on that.

So for us, the problem is that if you let us crawl and index automatically translated content, it quickly looks like auto-generated, low-quality content. So what we would recommend doing is letting Googlebot index the original-language content, with maybe the menu and things like that in German, and, for users, using something like JavaScript to show the translated version as well. So kind of the original content, and then the translated version for users, but prevent the translated version from being indexed by Google.

OK. That means it's OK to show the translated version automatically if it's done with JavaScript?

Sure. Yeah. If users, when they click on a search result, see the content they were looking for, that's perfectly fine. We just want to make sure that we're not indexing any automatically generated content, especially low-quality automatically generated content like that from an automatic translation. I know these systems are getting better and better, so maybe we'll have to rethink this. But it's still the case that automatically translated content is not really that fantastic.

As you say, the point is that we really don't want to use it for better SEO. We just want to give users the option to understand more of the content, without getting any kind of penalty. Thank you. We will do that: always show the original content in the HTML, and with JavaScript show the translated content, marked as automatically translated.

Cool. Fantastic.
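A minimal sketch of the setup described here: the indexable HTML carries the original-language text, and JavaScript swaps in a machine translation for users after load. The /api/translate endpoint is hypothetical, standing in for a server-side wrapper around the Cloud Translation API:

```html
<!-- Original user content stays in the HTML, so Googlebot indexes it. -->
<div id="user-text" data-original-lang="es">
  Texto original escrito por el usuario…
</div>
<p id="mt-notice" hidden>Automatically translated from Spanish.</p>

<script>
  // "de" would come from the subdomain, e.g. de.example.com.
  const visitorLang = "de";
  const el = document.getElementById("user-text");
  if (el.dataset.originalLang !== visitorLang) {
    // Hypothetical endpoint wrapping the Cloud Translation API.
    fetch("/api/translate?target=" + visitorLang, {
      method: "POST",
      body: el.textContent,
    })
      .then((r) => r.text())
      .then((translated) => {
        el.textContent = translated; // users see the translation
        document.getElementById("mt-notice").hidden = false;
      });
  }
</script>
```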
All right. Let me run through some of the questions that were submitted; we'll definitely have more time for other questions along the way as well.

Is there any demotion or penalty for having an article with an H1 and a title that say "33 great online business ideas to make you rich," and then putting it on a URL that says the same thing? Would the algorithms look at this as if you're trying to game the algorithms or something?

From our point of view, that's perfectly fine. It's not the case that we would automatically promote this site to number-one rankings just because it has the same keywords in the URL as it has on the page itself. But if you think this makes it easier for your users to access your content, feel free to go ahead. It's something we don't feel strongly about either way.

But you guys would basically just take it and then ignore either the H1, or just choose the URL path, or whatever, right? The algorithm would do whichever?

I think it depends on what all you're doing there. At some point, if you go down this path, it's really easy to get into a keyword-stuffing situation where you're essentially just repeating the keywords over and over again in the hope that this makes the page even more popular with Google and other search engines. And search engines have learned about that over the past, I don't know, 20 years or so that they've been active, and they ignore most of this keyword stuffing already. So it's something where you could do it, but it's not going to help your site.

So it's not worth doing. That's what I mean: there are certain plugins that I'm still upset about, because they still give you the option to enter meta keywords. But we talked about this like four years ago in a hangout: just ignore it.

Yeah. I think it's good to be direct in your content. And if you see people searching for something specifically, then why not mention that within your content? But don't overdo it.

For clients that are still obsessed with the old-school stuff, if it's still an HTML site, I'll just leave the keywords in, in quotes, just so they're there. And I say, look, it's not picking them up. It's no longer picking them up. Sorry.

All right. The next one is about canonical URLs. The rel=canonical setting and the URL parameter option "crawl representative URL" are set correctly. Why is Google still indexing the wrong version of my URL?

This is something that comes up again and again, and I think it's important to realize how canonicalization works. On the one hand, we try to figure out which URLs belong together: if we can recognize that a set of URLs all show the same content, we can treat them as one group. Then, within that group, we have to figure out which URL is actually the right one to show to users. We use a number of signals to figure that out. That includes the rel=canonical on the pages, redirects if you have any, internal linking within your website, the URL parameter handling tool, sitemaps; all of these things come together. And the more clearly we can understand which URL you really want to have indexed out of this group of URLs showing the same content, the more likely we will actually go down that route, choose that URL, and show it in the search results.

So if you're not seeing your canonical URL indexed, that usually means there are some conflicting signals that you're sending us elsewhere within your website. Maybe you're linking to a different version, or you have a different version in a sitemap file or in hreflang links, or some other connection within your website isn't making it perfectly clear which of these versions you want chosen.

The other thing to keep in mind is that when we pick a canonical URL, it does not affect ranking. We take all of these different URLs that we know about for the same piece of content, and we essentially just pick one of them to show in the search results. The ranking is the same regardless of which URL we show. So it's not a matter of needing to tell us exactly which one it is so that we'll rank it properly; it's more a matter of having a preference for which one is shown. And the clearer you can make your preference known, the more likely we'll actually choose that. But from a ranking point of view, they should essentially be the same.
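As a concrete illustration of "conflicting signals," here's a sketch of keeping them aligned; the URLs are made up. If /shoes?sort=price should canonicalize to /shoes, every signal should point at the same URL:

```html
<!-- On https://example.com/shoes?sort=price (the duplicate version): -->
<link rel="canonical" href="https://example.com/shoes">

<!-- Internal links should use the canonical form, not the parameter version: -->
<a href="https://example.com/shoes">All shoes</a>

<!-- And the sitemap should list only the canonical URL: -->
<url>
  <loc>https://example.com/shoes</loc>
</url>
```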
I'd like to know how we can ensure that e-commerce sites that index internal search pages don't get hammered, considering that there are many things to take care of, like internal linking, noindexing URLs, and canonicalizing URLs with parameters. What should we do with e-commerce sites that have internal search pages?

First of all, I think this is very specific to individual websites. I don't think there is one simple, clear answer that applies to all e-commerce sites, where you could always say they need to do exactly this and it'll be perfect in search.

There are two main reasons why we sometimes pick up these internal search pages. Sometimes these internal search pages are really useful as a category page. Some e-commerce sites are set up that way already, where if you search for, I don't know, shoes, you end up on a category page on that topic which lists all of the different items, and which gives us a lot more context for the individual items listed there. The other aspect that's really important for us is that we can crawl to the individual linked pages from there. So again, if you have a shoe store and you have individual shoes for sale, we need to be able to find all of those individual shoes by crawling your website, which sometimes goes through these kinds of category pages or search pages.

Those are the two main reasons why we sometimes use these. And depending on how you have your shop and your website set up, we might want to have some of these pages indexed, or we might not. If we can crawl your site already using your normal navigation, then maybe we don't need to have those search results pages indexed at all. Whereas if the search results pages are essentially a type of category or navigational element, then maybe we do need to have some amount of these search results pages indexed.

Figuring out what amount is worth indexing is something you need to work out for your site on your own. Some people like to have just the first page indexed. Some people like to say, well, if there are fewer than a handful of items, I'll just noindex that page and hope that those items are also linked from other parts of the website. These are all options that you have. Some people also just say, well, crawl them all, and if Google decides to crawl too far and index too far, then we still have those pages indexed; it's not the end of the world. Depending on your server and how many pages you have linked like that, that might be fine as well, where we can crawl all of these without overloading your server. Indexing is a little bit messy then, but it kind of works too.

So I'd recommend taking a step back and thinking about which of these pages are really important for your site, and why. Are they important landing pages? Are they important for the internal linking of your website? And if neither of those applies, why do you actually need to have them indexed?
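For the "noindex thin search pages but keep their links crawlable" option, a sketch of what the page-level directive looks like; where to draw the line (a handful of results, the first page only, and so on) is the site owner's call:

```html
<!-- On an internal search results page with too few results to be a useful
     landing page: keep it out of the index, but let crawlers follow the
     product links it contains. -->
<meta name="robots" content="noindex, follow">
```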
Lots of e-commerce sites obfuscate mega-menu links on internal pages to lower the number of links on a page and distribute more popularity to other pages. Users get the normal navigation with all the links, but the Google cache shows empty menus. Do you penalize this way of doing menus?

I don't exactly know what you're referring to, so it's hard for me to say exactly what is happening here. Sometimes what happens is that sites use JavaScript for some of these mega-menus, to make them a little more interactive, or to limit the amount of HTML that needs to be sent to the user on every page load. And in some cases, depending on how the JavaScript is set up, the cached page doesn't process that JavaScript, because the cached page primarily uses the HTML that was sent to Google when we crawled. If the JavaScript within that page can't run, maybe because it's a cross-domain situation, where the JavaScript isn't in the page but hosted on the site and there are restrictions on how it can run, then in cases like that the cached page would not show those mega-menus. And that's perfectly fine. It doesn't mean that we're not indexing those mega-menus. But that might be what you're seeing here.

With regards to sculpting the mega-menus to pass PageRank in a way that you prefer, for the most part that's probably not worth the effort, in the sense that internal linking helps us to understand the context of individual pages. If you're linking across similar product pages, if individual items are linked within similar sections of the website, all of that helps us a little bit. So I think whatever value you might get out of something like PageRank sculpting is usually offset by us not having a full understanding of the full context of the website, so we don't know how to rank those pages properly. For the most part, I think your time is better spent focusing on the website itself and improving the quality of the site, rather than trying to sculpt the links from individual products to other individual products within your website.

What happens when we use text on our page with display:none? It won't be shown to Google or to users. Will it have an effect in Google Search? For example, if I have a paragraph styled with display:none, will Google Search show the page if we search for that text?

It really depends on how relevant those pages are with regards to Google Search. In general, when we see that a piece of text is not visible by default, we try not to give it as much weight in the search results. But if someone is explicitly looking for that piece of text, especially if it's a fairly unique piece of text, like the anchor text in this example, and this is one of the few pages that actually has that text on it, then probably we'll still show that page in the search results, even if the text isn't shown by default. So it's not black and white, where we'd say if it's not visible by default, we won't show it at all. We won't give it as much weight, which means there might be other pages that rank higher, but we'll generally still show it.

With mobile-first indexing, we're changing that a little bit, in the sense that we understand that on mobile you can't always show everything by default. Often you have these sliders or expanding elements that essentially make a site more usable on a mobile device. Because of that, we're moving more in the direction of: if it's not visible by default, but you can get to it on a mobile phone, that's OK too, and it's not something we would treat differently in search.
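A sketch of the kind of hidden-by-default content being discussed: an expandable section whose text is in the HTML but not visible until tapped. Under mobile-first indexing, as described above, this pattern is fine; the markup here is just an illustration:

```html
<button onclick="toggleSpecs()">Show full specifications</button>
<div id="specs" style="display:none">
  <!-- This text is in the HTML, so it can be indexed, but it isn't
       visible until the user expands it. -->
  Full product specifications…
</div>
<script>
  function toggleSpecs() {
    const specs = document.getElementById("specs");
    specs.style.display = specs.style.display === "none" ? "block" : "none";
  }
</script>
```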
The website has two languages on the same URL, with hreflang and canonicals integrated. Does poor content in one language affect the indexing and positioning of the other language, which has really good content?

I'm not quite sure what the situation is on your website. If you have one URL and you change the content depending on what language you think the user wants to see, that's a bit tricky, because when Googlebot sees that page, it will probably only see the English version, since we crawl from the US. So if you have an alternate version, maybe for users in France, then probably we would never see it.

On the other hand, if you have the content in both languages side by side on the same page, we would probably pick up both of those language versions, but it's hard for us to recognize what the primary language of that page is. So again, if you have English and maybe French content on the same page, maybe side-by-side translations, then if someone searches in English, it's hard for us to be really sure that this is actually a good English result for them, because we see there's a lot of French on the page too. And similarly for a French user: we see the English and the French content, and this looks like a good match for the French query, but it has a lot of English content too. Is that actually a good result?

So for the most part, we recommend making individual pages for individual languages, so that if someone searches in English, we have a clear English page that we can show them, and if someone searches in French, we have a clear French page that we can show them. The clearer you can make it, the more likely we'll actually follow your lead and show the right version.

There is one situation I've seen where it gets a bit confusing for our systems, primarily around travel or vacation content, where maybe you have an English-language website but you're writing about, I don't know, Spanish villages or something like that, and all of the city names, house names, and hotel names are in Spanish. When our systems look at that, they see a lot of English content, but also what seems to be a fair bit of Spanish content. So how do we treat this page? Usually what happens in those situations is that we just show the translate link in the search results. Essentially, our systems are not really sure what language the page is in, but if you want, you can get a translated version, and we'll link to Google Translate, I think, for a translated version of that landing page. From our point of view, that's not necessarily bad; it's just something to be aware of, especially if you're writing content about places that have names and other words in a different language on the page.
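A sketch of the one-page-per-language setup recommended here, with hreflang annotations tying the versions together; the URL structure is just an example, and the same set of annotations should appear on each language version:

```html
<!-- On https://example.com/en/trail-guide (mirrored on the fr page): -->
<link rel="alternate" hreflang="en" href="https://example.com/en/trail-guide">
<link rel="alternate" hreflang="fr" href="https://example.com/fr/trail-guide">
<link rel="alternate" hreflang="x-default"
      href="https://example.com/en/trail-guide">
```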
I have a US and a UK version of the site. In Search Console, I can't see the implementation of my hreflang tags on the US version, but I can on the UK version. Is there a reason this information isn't populating in Search Console?

I don't know. I'd love to take a look if you can send me the URL. I can look at what data we would show in Search Console for a situation like this and see if there's something we can do to make it a little clearer. Generally speaking, if these URLs are indexed individually, if we can really tell that these are unique URLs, one for the US and one for the UK, then we should have information about that for both of those sites. The one situation where I could imagine it gets tricky is if you have exactly the same content on both of these pages. Then it might be that our systems look at this and say, well, this is exactly the same content; we can save some time, and save you the trouble, and just index one version. In a case like that, we would probably show the hreflang information for only one of these versions. But if you really have unique content for these individual locations, and we index those pages individually, then we should have information there in Search Console.

One way you can test this, just recently released in Search Console, is the URL Inspection tool, where you can enter a URL from your US version and from your UK version and double-check: are we indexing that URL individually, or are we indexing a different URL as the canonical for that one? It's a really cool tool. I'd recommend checking it out, especially in a situation where you have same-language content for different countries and you're not sure if it's actually being picked up individually.

We're still having problems with our website with regards to SafeSearch. We submitted a request over three weeks ago, and we're wondering, should we submit again? Is something stuck?

In general, these kinds of changes with regards to SafeSearch can take quite a bit of time. I'd say this is more a matter of a month or two rather than a week or two, because we essentially need to reprocess that on our end, and that takes quite a bit of time. It's not a request that comes in all the time, so our systems aren't really tuned to processing these kinds of requests quickly; it essentially takes quite a bit of time even after it's submitted on our side. And the last I checked on this, after you mentioned it in the last hangout, it looked like things were set up properly on our side, and it's really just a matter of time until that settles out and is visible in the search results as well.

I'm working on a website. Search Console shows more than 47,000 pages indexed, but Google shows only around 300 pages. Can you help me find the possible reasons for this? There are no manual actions.

I assume that by "Google shows 300 pages indexed" you mean you're doing something like a site: query for your website. In general, a site: query is a very simplified approximation of what we think is relevant to show from your website; it's not meant to show the complete number of URLs that we have indexed. So I would not use site: query results for any kind of diagnostics on your website. That's the first thing to watch out for: don't use a site: query for that.

The other thing might be relevant to your site, and to other sites as well. It looks like you have a property site, and one of the things I've noticed with property sites is that there's often a lot of duplicated content. You might have different listing pages for individual types of properties or individual locations, and sometimes there's a lot of overlap there, which means it's possible for us to go off and index a ton of different URLs from your website.
But actually, only a certain handful of them are worthwhile to index for the site itself. So sometimes the index count we show in Search Console is this high count that we've seen from your website, but that doesn't necessarily mean it's the relevant count you need to focus on. In particular, if you're running this website, you can probably check your database to see how many properties you really have and how many pages that would translate to. If you have, I don't know, let's say 2,000 properties and you have 47,000 pages indexed, then something seems off: that's roughly 20 times more URLs than you have actual items on your website, which seems a bit too much. So that might be worth double-checking as well.

I wouldn't necessarily assume that a higher number of indexed pages automatically means more traffic to your website. Sometimes it's worthwhile to have fewer pages indexed and to know that these are really good pages, that they're really relevant in search, and that they actually have a chance of ranking in a competitive environment. That's the direction I would head: don't just focus on the numbers; really think about what is actually important to have indexed for your site.

Regarding indexing, when are you getting rid of the separate desktop and mobile smartphone options when you fetch? When is that going away? As soon as everybody has converted to mobile-first indexing?

I suspect that will still take a bit of time. We've started moving a lot of sites over to mobile-first indexing, but there's a lot left to do. In particular, there are lots of sites that aren't ready yet. So we want to figure out more of a long-term plan: what it takes to actually get these sites moved over, how we can give them information on things they might need to do to make their sites ready for mobile-first indexing, all of these things. I expect that will probably still take years. I don't see it happening any time soon.

Yeah, it's not like the whole world can convert all of a sudden. It's just that when you fetch, do you fetch desktop and mobile at the same time? I usually do both.

You mean in Search Console?

Yeah, in Search Console. When you fetch, you can fetch as desktop, and then there's an option to fetch as mobile smartphone. I know that a fetch covers desktop and smartphone at the same time, but there are two options, so I just submit both.

With regards to fetching and checking, I think you can check them separately. With regards to submitting to indexing, you only need to submit one of them, and then we'll pick both of them up.

With my recent cases, there was one that was really crazy, so I had to do both, yeah.

Yeah. I think it's good to have the tools available to check both of the versions, because desktop is not going to go away that quickly. Even if we have 90% of sites moved to mobile-first indexing, you'd still want to be able to double-check what the desktop site looks like to something like Googlebot.

Exactly. OK.

When companies that are in the same industry but aren't competitors blog with each other in order to share relevant, helpful information, is that frowned upon? What if links are included in order to give proper credit?
In general, if you have natural links from one site to another, that's perfectly fine; that's not something I'd really worry about. My worry is with the way the question is phrased: explicitly saying "we're sharing relevant, helpful information between the two of us" almost sounds like you're just doing a link exchange between different sites, which is something the web spam team might not like to see, because it's essentially unnatural. But if you're linking to another site as part of your normal content, because it's relevant for the specific thing you're writing about, that's something I would not worry about at all. It's really the situation where you have systematic, organized linking between different sites that I'd worry about.

So how do they determine that? I mean, if a site is very consistent with that kind of pattern, do they just take action?

Usually these things are pretty obvious. It's not the case that you run across one random blog post and think, oh, I wonder if something sneaky is happening in the background here. Usually you look at a site and you see that everything is like this; those patterns align and make it a little more obvious. But there are sometimes tricky situations that we don't get right, that we don't get right algorithmically, and that's one reason we handle this manually, and the manual web spam team does take a look at these sometimes. Sometimes they get it wrong, and we get a reconsideration request back saying, hey, this is completely natural, there's nothing crazy happening here. Someone else might take a look at that and say, oh, yeah, you're right, we shouldn't have taken action on this, and they'll revert that manual action as well.

What do you do about acquisition sites, sites that have been acquired? There's one famous company out there that owns ten of them; it literally owns everything. And they link from one site to another, to another, to another. Do you just ignore that? I mean, is it fair? What do you do in that case? And I think there's one I know of, in the technology world, that has like 50 of them.

That happens. It really depends on the situation. The web spam team might take a look at that and say, well, this is more like a doorway situation, in that you have different ways that lead to exactly the same product. Then we might say, well, these are doorway pages. But it really depends on the individual products. Maybe they're completely separate products, maybe they're aimed at completely separate audiences, and they just happen to be owned by the same person; then they're essentially completely separate websites, and that might be perfectly fine.

If you want, I can still send you an example. I just thought maybe you were fed up, so I stopped for a little bit.

You know, getting these examples is always useful, because it triggers discussions internally as well, where we can look at it and say, well, are we doing the right thing here? Are we being too harsh? Are we being too easy on these sites? Do we need to do something to provide more diversity in the search results here?
These kinds of discussions are always really useful. Sometimes what happens is we look at these and we say, well, this is not really a great set of search results, but essentially such a small group of people actually sees it that we assume they'll just go to the second page of the search results and continue looking for content there, and that might be fine as well. Whereas if something is really visible and really causing problems for a lot of users, then the search quality team generally tries to resolve those issues a little faster, just because it's affecting a lot of users.

OK, I'll send it to you whenever I see something.

Sure, that's fantastic.

All right. Does Google treat links differently when they're visible on the default page load versus links that are only visible after clicking on JavaScript pagination links? Does Google pass PageRank in both of these situations?

I'm not completely sure which scenarios you're looking at, so it's hard for me to say exactly. In general, if a link is generated with JavaScript and isn't there in the HTML, then we need to render the page to see that link. We do render pretty much every page nowadays; sometimes it's just a matter of time until we get to the rendering part, so there might be a bit of a time offset there. But if the link is visible on the page, whether after rendering with JavaScript or in the static HTML, then we can follow it, and we can pass any signals that we have through those links like any other normal link. Just because a link is there because of JavaScript rather than static HTML doesn't matter to us; in the end, it's a link on the page, and we'll treat them the same.

If you don't want to pass any signals, you can put a rel=nofollow on those links. You can do that in the static HTML or with JavaScript. The tricky part is that you should not change these kinds of things with JavaScript. For example, if your link is in the static HTML without a rel=nofollow, and then JavaScript adds a rel=nofollow afterwards, then, because these are processed slightly offset in time, it might be that we process the page first without the rel=nofollow and forward all of the signals through those links, and then afterwards, when you add the nofollow with JavaScript, it's already done. So essentially, you need to make sure that whatever you do with JavaScript doesn't conflict with what you have in the static HTML.

Another extreme case is if you add a noindex to a page with JavaScript, or you have a noindex on a page and remove it with JavaScript. That would be a big, conflicting change made with JavaScript, and we can't really guarantee which of these versions will override the other, because they conflict: you're telling us this page should be indexed, and then, after we process the JavaScript, you're saying, well, it shouldn't be indexed. Make up your mind: which one do you want? Maybe we'll index it initially and drop it later. Which one of these should we treat as the source of truth for your website? That's what I would watch out for when it comes to JavaScript changing things on a page. But if you're just adding links to a page, or adding content to a page with JavaScript, and these are normal links, that's all perfectly fine.
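To make the conflicting-signals case concrete, a sketch with made-up URLs; the first block is the pattern to avoid, the second is unproblematic:

```html
<!-- Avoid: the static HTML and the JavaScript give Google two different
     answers about indexing, and processing order decides which one wins. -->
<script>
  // Page was served without noindex; this adds it only after rendering.
  const m = document.createElement("meta");
  m.name = "robots";
  m.content = "noindex";
  document.head.appendChild(m);
</script>

<!-- Fine: a link added with JavaScript is treated like any other link once
     the page is rendered. Set rel="nofollow" here or in the static HTML,
     but don't flip it between the two. -->
<div id="pagination"></div>
<script>
  const a = document.createElement("a");
  a.href = "/search?page=2";
  a.textContent = "Next page";
  document.getElementById("pagination").appendChild(a);
</script>
```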
In which situations does Google show the title as the snippet as well, taking the snippet neither from the page content nor from the meta description? Does that mean the content and description are not relevant to the query?

Sometimes that can be the case. We do try to adjust the snippet and the title based on the query the user entered. We try to make both of them explain a bit what the page is about, so users know what they're getting into and how it pertains to the query they gave initially. So we try to use the title and description that you have on your page if we can. But if we can't, maybe we'll take something from the content on the page; maybe the title itself is what's relevant there. Sometimes it's also the case that the HTML on the page is broken in a way that we can't clearly extract the title, and we don't know which part is the title and which part is the body. So that might be something to double-check there, too.

Is it still good advice to say that AMP pages don't need a separate sitemap file, with the new Search Console and the data it can provide?

I believe that's still the case, because if you have paired AMP pages, you'll have the link rel=amphtml pointing to your AMP pages, and, from the AMP pages back, a link rel=canonical to the traditional web page. Then you don't really need a separate sitemap file. We'll update the AMP cache when we see an update of the normal HTML page, and we'll also update the AMP cache separately as we see your site being shown in the search results. So you wouldn't necessarily need a separate sitemap file there. If you have AMP pages which are AMP-only, then obviously a sitemap file makes a lot of sense, because those are the only versions of the pages you have, and we wouldn't have an HTML version that we could otherwise use as a basis for your AMP version. So if you have the traditional connected AMP setup, with AMP pages connected to traditional web pages, definitely use a sitemap file for your traditional web pages. If it's AMP-only, then those are the URLs you have, so you put them in the sitemap file.
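The paired setup referred to here looks like this, with made-up URLs:

```html
<!-- On https://example.com/article (the traditional page, listed in the
     sitemap): -->
<link rel="amphtml" href="https://example.com/article/amp">

<!-- On https://example.com/article/amp (the AMP version): -->
<link rel="canonical" href="https://example.com/article">
```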
If you have a new e-commerce site that sells office supplies, would getting customers to leave reviews be enough to make it distinct from the alternatives, or does it need to go above and beyond somehow?

In general, if you're offering something that's essentially a commodity that everyone has, then you really need to find some angle to make your site unique and compelling. Maybe that angle is the reviews you have on your website, if they're useful and provide additional information; maybe it's not. Especially if you're selling office supplies, I don't know how useful a review for, I don't know, paper or pens or something like that can be. Maybe if you have really unique and cool office supplies, that would help. But otherwise, it feels like a very small step toward making things a bit more unique and compelling. I assume that from an indexing point of view we'll index these pages perfectly fine, and if someone explicitly searches for them, we'll definitely show them in the search results, because we have them indexed. But if you're competing against existing sites that have been working on this for years and years, you really need to make sure you have something that can be at the same level. Reinvent office supplies? I don't know.

I mean, it's always hard when it comes to these commodity things. But I'm sure someone who's really creative could figure out a way to offer office supplies that is unique and compelling, in a way that makes people explicitly search for that site and say, I want to get my paper from this specific website, because I know what they're doing is really cool, or I really like what they're doing otherwise, or their office supplies are unique. I don't know. Finding that right angle, I think, is important for any business you have, online or offline: really making sure that you're differentiated.

If it's a news site, is that the same kind of problem?

I can barely hear you. Can you repeat that?

Yes. Is this any better?

Yes.

If it's a news site, is that the same kind of problem? I mean, if you're going to cover, and I know you talked about this last time, how to unlock an Android phone, which has probably been discussed a million times, is that the same kind of problem, or is it less of an issue because it's not really geared around something specific? So if you say, how to unlock an Android phone blindfolded, I don't know, but you have a unique angle on it, would that help that news site?

I think anything you can do to give your site a unique angle, something that users actually want to see, can help there. That could be doing things blindfolded, or doing all of your tech reviews while dancing, or something crazy where people say, well, this is actually fun to watch and informative, so I'll go there explicitly. Anything like that helps us to understand that this is actually something unique and not like everything else. And finding that angle is really hard. You're still competing with all of these existing competitors who might have been doing this for a really long time, and who maybe put a lot of money, time, and effort into getting where they are. So ranking and competing against them is always going to be really hard.

I'd love to give an example.

Sure.

Dollar Shave Club is an example of how a newcomer made the big razor companies look old-fashioned. They just walked in there. It wasn't an overnight success, but yeah. So take a look at that and at how you could reinvent things. There are a lot of Tony Robbinses out there, but can you be the next Tony Robbins, I don't know, in a different way?

I think a large part of that is also not specific to SEO. It's essentially marketing, positioning your business in a unique way. And that's an angle you need to keep in mind whenever you're doing anything like this. Dollar Shave Club probably also had a lot of money.

No, not really. You don't really need to start off with money. I mean, today you can go rent a camera, a RED camera, that used to cost $5,000, for like $100 a day, and you can create almost a movie. I mean, it's a lot of hard work.

And the sites that are in some of these competitive niches didn't get there just by waiting around. They spent a lot of money and a lot of time on this as well. So getting in there and pushing them aside is not that easy. That's also one of the reasons why I usually recommend, if you're getting started online, focusing on something that's unique to your site and your business, so that you don't have to compete with all of these other sites.

Just like Google: it started in a garage. The rent was expensive, right?
Yeah, I mean, you have to find a unique angle that you can focus on. Maybe it makes sense to start with a small niche and say, well, I'll be really good here, and build out from there. Whereas if you start now and say, oh, I think I want to compete with Amazon or, I don't know, Alibaba, then you're going to have a really hard time. That's going to be really tough.

In other words, if you make it, they'll try to buy you out.

I mean, that might be perfectly fine, too. If you're happy with that and it works out for you, why not? But it's not easy if you go into a market that already has a lot of really good stuff.

When you say niche, obviously no one wants to compete head to head with Amazon or Alibaba. So maybe the niche is a site focused on the world's best pencils. Or maybe that's still too broad?

I mean, these are not really SEO problems at that point. It's something where you need to put your mind to where you think people are seeing issues that you can solve in a unique way. Sometimes these are things you come across if you do a brainstorming session together; some of them you don't come across that quickly, or you have to be in the right frame of mind to accidentally stumble into the thing and say, oh wow, everyone is, I don't know, buying glow-in-the-dark pencils now, so I'll provide radioactive pencils. Probably a bad idea at the moment, but you get the idea.

All right, let me double-check what other questions we have left; we're almost out of time.

There are some things about backlinks: if there's a link and it later gets removed, is that bad? Essentially, we reprocess those pages over time, and if we see the link is gone, then that's a link that's no longer there.

If a page returns ETag or Last-Modified headers, will Google send those in future requests and use a 304 sometimes? We sometimes do If-Modified-Since requests, but we don't always do them, because we know some sites get them wrong. It does help us to reduce the amount of bandwidth that your server needs to provide, but it's not something we would always do.
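For reference, a sketch of the conditional-request exchange being described, with placeholder values: the client replays the validators from an earlier response, and the server can answer 304 Not Modified with no body, saving bandwidth:

```javascript
// lastModified and etag come from a previous response's Last-Modified
// and ETag headers.
async function recrawl(url, lastModified, etag) {
  const res = await fetch(url, {
    headers: {
      "If-Modified-Since": lastModified,
      "If-None-Match": etag,
    },
  });
  if (res.status === 304) {
    // Unchanged since the last fetch: no body was sent, reuse the cached copy.
    return null;
  }
  return res.text(); // Changed: process the fresh content.
}

recrawl("https://example.com/page",
        "Tue, 26 Jun 2018 10:00:00 GMT", '"abc123"');
```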
We're moving a large site to HTTPS; is there anything we need to watch out for? So we have a lot of items on our checklist for moving to HTTPS; I'd double-check that. There are also some really good third-party checklists on moving to HTTPS, which have more things to watch out for. So I'd go through those, and, especially if it's a large site, try to do it as systematically as possible and really document all of the steps you take, so that afterwards, if you see something quirky in some part of the site, you can trace back to why that might be happening.

Is there any plan to bring the job search API into general search? At the moment, we don't have any plans for that. I assume it's something we might reconsider over time, when we see how people are actually using this API.

Let's see. What do you think about canonical tags for images, to have only the best size indexed? We don't support rel=canonical for images at the moment, so that wouldn't necessarily help. We're currently discussing what we can do with srcset attributes within the image element, or within picture elements, when you have responsive images with files available in different resolutions. It's kind of tricky, because on the one hand, I understand that people like to get all of their images indexed. On the other hand, for image search, if people are explicitly looking for an image of a specific size on a specific topic, then they're probably not coming to your website to convert in whatever way you want them to convert; they're probably just looking for an image. So I don't know how useful that will be for a website in general.

From what I've seen with image search, what makes sense for a lot of websites is to provide images in a way that helps users find your content, not to help users take your images as the content. That's something worth thinking about as well: are you providing these images for users to reuse, or are you using these images as a way for users to understand your content better? For example, if you're looking for cycling gloves, you might use image search to refine what type of cycling glove you want. But for that, you don't need a specific image size; any image of a cycling glove will help you understand whether this page has the type of glove you want to buy.

All right, I think we have about two minutes left. Are there any questions from your side? Anything left on your end that you'd like to cover?

Yeah, just a very important thing. For people who still don't have HTTPS, the move is in July, right? As soon as they open the site in Chrome, boom, it will just give them that warning. I'm still seeing lots of sites without HTTPS. That's very, very sad. I wish I could just change everything, but I don't have time. But yeah, you know? So how is this going to affect SEO now? Besides the security aspect, are you going to apply that to ranking? Because I remember two years ago we talked about speed, and you were like, speed? I don't think that's going to be applied to ranking, and now it is. So is SSL going to be the same?

I don't know of any plans at the moment to boost that in the search results. But when it's visible in the Chrome address bar, I think that will encourage more and more sites to move over anyway.

OK. So it's not going to block the site, though, right? It's just going to say that the site is not secure?

No, I think it just shows "not secure" next to the URL at the top. It's not an interstitial or a block or anything.

Like the little orange thing, yeah.

Yeah. But I think it will encourage some people to move over. I'm sure there will be enough who say, well, I don't have time to do this now, kind of like my blog, or things like that.

All right, let's take a break here. I need to head out. Thank you all for joining. I have the next hangout set up for Thursday in German and Friday in English. So hopefully I'll see some of you there. All right, bye, everyone. See you again.

Bye, thank you. Thank you.