OK. Welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. Today, I have a bunch of great guests from Zurich and nearby who are joining us live in Zurich, as well as a bunch of people who are live here in the Hangout as well. So thank you all for joining in. Maybe I can just do a quick round of introductions here so that everyone knows who everyone is. My name is Chris and I work at eBay. My name is Marco, I work for my own SEO company. I'm Daniel. I work for LOKRONA.io. My name is Veronica. I'm working for Ringier, a publishing house, on the newspaper Blick. My name is Jenny. I work at a marketplace for moving and cleaning. Hi, my name is Toby. I also work for Ringier, which is a publishing company, and I do news SEO there. Awesome. So it's nice to have so many people here. And we realize that everyone here speaks German, so we thought we would just do the whole Hangout in Swiss German now. Is that OK with you? No, just kidding. OK. Do any of you who are live here in the Hangout want to do a quick introduction? No. OK. That's fine. That's fine too. Hi, John. Hi. Hi, my name is Sanjitha, and I am from India. Oh, great. That's a bit further away. That's nice. Hey, I'll say something as well. I watched the publisher Hangout before, and earlier, like three days ago, I joined the last Hangout as well. I'm an SEO specialist and I built an agency. OK. Sound OK on my side? Yes. OK, I'm Rado. I was that close to joining you guys on location, but there is always a next time. I'm dialing in from Vienna. From Vienna. Kind of close, but not right around the corner. Still the same neck of the woods. Yeah, same continent.

All right, so I don't know, where should we start? Do any of you want to start with a question? We talked about mobile-first indexing. I think it would be quite interesting to know when that is slated to be rolled out, how we might spot it in our server logs, maybe, that kind of thing. I think that would interest everybody. OK. Yeah, mobile-first indexing. I think that's a fascinating topic. There are lots of things that are happening there, mostly with regards to testing that we're doing at the moment. So it's something where we're doing a lot of tests to make sure that when we switch over, we don't cause any problems, ideally. For a lot of sites, it is unproblematic. So we plan to shift more and more sites over, as we can tell that they're OK. And especially if it's a responsive site, then you don't really notice any big difference, because we just crawl with the mobile user agent, and the responsive site has the same content, so we index the same content. So that's how I think you would probably recognize it in the log files. If you look now, probably something like 80% of the crawling is with the Googlebot desktop, and maybe 20% is with mobile, with the smartphone Googlebot, and probably that will shift over so that most of the crawling will be done with the smartphone Googlebot and less crawling with the desktop Googlebot. So if you really watch your log files, you can probably notice that fairly obviously. And the plan is to have a little bit more information before the end of the year, maybe a blog post, on some of the changes that could be done.

It's trickier with websites that use separate mobile URLs, because then you have the situation that we switch to the mobile URLs, and we focus on those. And things like the interlinking within the mobile pages should be between the mobile pages. The structured data URLs should be pointing at the mobile pages. The hreflang links should be within the mobile pages so that each version of the site is linked within each other. So that's pretty much the main change that is happening there. And since a lot of people are on responsive design or use dynamic serving with the same URLs and just slightly optimized content, that's a lot less of a problem. And if there is an m. subdomain, for example, or something similar, the whole rel-alternate setup, all of that stays the same. The rel canonical and rel alternate stay the same so that we have the connection between the mobile and the desktop pages. But within the mobile pages, the links should be to the mobile pages. So if you click on something as a user, it should go to the mobile page. Obviously, the structured data links that you have on the page should also be to the mobile pages. hreflang, the different language links, should also be for the mobile pages.
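As a rough sketch of the log-file check mentioned above: this assumes a standard "combined" access log where the user agent is the last quoted field, and it simply splits Googlebot requests into smartphone versus desktop by looking for a mobile device string in the user agent. The file path and log format are assumptions for illustration, not anything Google publishes.

    # Sketch: estimate the desktop vs. smartphone Googlebot crawl share from an access log.
    # Assumes a "combined" log format where the user agent is the last quoted field.
    import re
    from collections import Counter

    UA_RE = re.compile(r'"([^"]*)"\s*$')  # last quoted field = user agent

    def googlebot_share(logfile_path):
        counts = Counter()
        with open(logfile_path, encoding="utf-8", errors="replace") as f:
            for line in f:
                match = UA_RE.search(line)
                if not match:
                    continue
                ua = match.group(1)
                if "Googlebot" not in ua:
                    continue
                # The smartphone Googlebot carries a mobile device string
                # (for example "... Nexus 5X ... Mobile ..."), the desktop one does not.
                counts["smartphone" if "Mobile" in ua else "desktop"] += 1
        total = sum(counts.values()) or 1
        return {kind: n / total for kind, n in counts.items()}

    if __name__ == "__main__":
        print(googlebot_share("access.log"))  # e.g. {'desktop': 0.8, 'smartphone': 0.2}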
Is it safe to say that it's the Nexus 5X that the smartphone Googlebot uses, or? At the moment, I think that's what we use, yeah. But in the past, we have changed that over time to match what is currently kind of a common device type, so that Googlebot is similar to what the average user would see when they use a mobile phone. And I think at some point, the Nexus 5X will be so old that we have to say, OK, we shift to, I don't know, Nexus 9 or 10 or whatever Nexus version happens to be more popular then. I don't know. That will happen at some point.

All right. Any questions from you all live here? I have a quick follow-up on this discussion. Is it normal or not normal, if mobile-first is on for one domain, to see spikes, like huge spikes in traffic from mobile? Would that happen? Or is it not going to be a lot of change in visibility overall when mobile-first is switched on? I don't think you would see a change in traffic from the user side for mobile-first indexing, because we still show the mobile URL to the mobile users and the desktop URL to the desktop users. So from the user point of view, they really don't see a big change, other than maybe the titles or the snippets match what we show on the mobile version. But there are a few big sites that have seen a huge spike in visibility and also in traffic from mobile alone, almost overnight. Would that mean that they basically switched over to mobile-first, or is there no correlation there? I don't think there is any connection there. I think it's just that we have some algorithms that try to show the right results on mobile devices, and probably they have a good mobile page, and that's a good reason to show them more on mobile. But the indexing part is more a technical thing that happens before we actually show that in the search results. That makes sense.

All right. Let me see some of the submitted questions. If there are some that got a lot of votes, we can take a look at those. So the first one I see here: we added a noindex tag to pages with thin content. Should it be noindex, nofollow or just noindex? From my point of view, that's kind of easy to answer, because noindex is essentially a sign for us telling us that we should not index this page. And if we don't index that page at all, then we don't process the links on there either. So noindex, nofollow is kind of the same as just normal noindex. So that's kind of an easy one. I don't know, any more questions from you all in the meantime? Maybe a question about geoblocking. Because we as a publishing company, we have problems with legal cases from Germany. And we are located in Switzerland.
And so the lawyers are thinking about, let's geoblock Germany. And the question is, will that somehow influence crawling, probably, or indexing? And it will also influence the users who are in Germany. So what's the best thing to do to avoid legal cases from a specific country, for example? I can't give you legal advice. I just need advice to be good in Google, to stay good in Google. So I don't know if this is useful for legal reasons or not. But in general, when it comes to crawling, Googlebot mostly crawls from the US. And Googlebot wants to see the content that users in that region would see. So if Googlebot crawls from the US and US users are allowed to view your website, then we can index the website. If you block other countries, then that's kind of up to you, that's essentially your decision. We've looked into crawling from other countries in the past. But that didn't lead to anything really useful for the most part, except for some countries that traditionally block US users. So for example, in Korea, I think it was a problem in South Korea that a lot of sites blocked US users from accessing their pages. And if US users are blocked, then Googlebot can't index the content at all. So we try to crawl locally in places like that. But within Europe, we haven't noticed that we get any additional value by crawling from different countries.

OK. That's interesting, because it would actually be super helpful from that perspective to be crawled from the local countries, because we tailor the content to the local user. So if Google shows up, we go, hey, it's a non-authenticated user, and say, hey, here's some stuff that might be interesting to you, but that's only a subset of the total content. That was on my list of questions to ask, actually, whether that was something you're expanding, but apparently not, in that case. The big problem is that there are a lot of countries and a lot of locations. And if we crawl every website from every country, from every location, then the website won't have room for normal traffic. So we try to be reasonable there. And really, we've kind of experimented with this a little bit. And it's really only useful in a very limited number of locations. So that's kind of the thing there.

When you have content that is personalized to the user's location, it's also important that you have a lot of content there which is generic as well, because Googlebot usually crawls from the US. So if you only show US content to Googlebot, then we'll only index the US content. And if someone is searching for local content and you never show that to someone who is visiting from the US, then we wouldn't be able to rank your site for that local content. So usually what I recommend is that you have some section that is kind of personalized for the location and some section that just has generic content about what else you do. And obviously, you can link to more specific content. Like if you have a page for Zurich, then you can link to the page for Zurich. And there you show the content for Zurich. But at least the home page, the main page, is kind of this mix of localized and generic. And sometimes we see that as a problem with bigger sites as well. I'm sure that happens to our sites, too, where we try to be smart and personalize the content right away. But that just ends up with the website being indexed for, I don't know, San Francisco. Sometimes you search for Yelp and it shows Yelp, San Francisco. You're like, well, I'm not in San Francisco. Why are you showing this to me? So those are kind of the situations where that comes up.
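To make that "mix of localized and generic" idea concrete, here is a minimal sketch. It assumes you already have some GeoIP helper (lookup_country and local_content_for are hypothetical names); the point is that there is deliberately no Googlebot-specific branch, so Googlebot crawling from the US simply gets whatever any other US visitor gets: the generic sections, plus a localized block only when one exists for that country.

    # Sketch: build a homepage that mixes generic and localized content.
    # lookup_country(ip) and local_content_for(country) are hypothetical helpers
    # you would already have; note there is no special case for Googlebot.

    GENERIC_SECTIONS = [
        "About our service",
        "All regions we cover",   # links out to the individual location pages
        "Popular categories",
    ]

    def homepage_sections(visitor_ip, lookup_country, local_content_for):
        sections = list(GENERIC_SECTIONS)
        country = lookup_country(visitor_ip)      # e.g. "CH", "DE", "US", or None
        local = local_content_for(country)        # e.g. Zurich teasers if country == "CH"
        if local:
            sections.insert(0, local)             # personalized block on top, generic rest below
        return sections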
I think I know the answer to this already, but are you OK, or not OK, with us doing something slightly different for Googlebot when we see Googlebot coming in, to get around that personalization problem? That would be cloaking. Yeah. Yeah. OK. I kind of guessed that was the case. Yeah. Yeah, sure. OK. I mean, I think for sites in Europe, it's a little bit easier, because the US is such a special market anyway that you can say anyone from the US gets this special content. And then it's not the case that you're saying, well, Googlebot gets this special content; it's that anyone from the US gets this content. And that's not your primary market, so you might not care too much about what those users would see. Yeah, I think that's the way forward. I don't know. Sometimes a bit tricky. I think the hard part with a lot of this localized content is when you have to block users in the US for some reason, maybe for legal reasons. Then, of course, you're also blocking Googlebot, which for us who are located outside of the US might be, well, I'm letting 90% of the people go to my website, except for this group of people in the US, for legal reasons. And it would feel strange not to be indexed at all. But from a technical point of view, if we crawl from the US, we see the content from the US.

All right. What else can we chat about from you all here, live in the Hangout? If I may. We have a fairly big e-commerce site with half a million products, and so we asked an SEO consultancy, how can we improve our SEO. And they said that our best next step would be a sitemap. So we built a sitemap that covers, I would say, 90% of the website, but mostly the products, the pages of the products. And what we do is submit a new, updated sitemap every few days to Google. And it seems like it's never catching up. It's never indexing all the pages. It seems kind of stuck, and we don't really understand why. And is there any way to, does it even matter? Is there any way to improve that situation, to get it unstuck? How to get that unstuck, I guess, yeah. So if you're looking at the sitemap files in Search Console, you have information on how many URLs are indexed from those sitemap files. The important part there is that we look at exactly the URL that you list in the sitemap file. So if we index the URL with a different parameter, or with different upper or lower case, or a slash at the end or not, then all of that matters for that sitemap file. So that might be an issue to look at there. Another thing that sometimes helps is to split the sitemap files up into separate chunks, kind of logical chunks for your website, so that you understand more where pages are not being indexed. And then you can see, are the products not being indexed, or are the categories not being indexed? And then you can drill down more and more and figure out where there might be problems. That said, we don't guarantee indexing. So just because a sitemap file has a bunch of URLs, it doesn't mean that we will index all of them. That's still kind of something to keep in mind. But obviously, you can try to narrow things down a little bit and see where you could improve that situation. Thanks. All right. What else can we chat about from you all?
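A minimal sketch of that "logical chunks" idea, assuming you can produce a list of (URL, lastmod) pairs per section of the shop: it writes one sitemap per section plus a sitemap index, so the indexed counts in Search Console can be read per section. The file names and helper functions are illustrative; the one thing that matters is that each loc entry is the exact canonical URL, including trailing slash, case, and parameters.

    # Sketch: split a large sitemap into logical chunks plus a sitemap index,
    # so Search Console's "indexed" counts can be narrowed down per section.
    from xml.sax.saxutils import escape

    def write_sitemap(path, entries):
        # entries: iterable of (exact_canonical_url, lastmod "YYYY-MM-DD")
        with open(path, "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url, lastmod in entries:
                f.write(f"  <url><loc>{escape(url)}</loc><lastmod>{lastmod}</lastmod></url>\n")
            f.write("</urlset>\n")

    def write_sitemap_index(path, sitemap_urls):
        with open(path, "w", encoding="utf-8") as f:
            f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
            f.write('<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
            for url in sitemap_urls:
                f.write(f"  <sitemap><loc>{escape(url)}</loc></sitemap>\n")
            f.write("</sitemapindex>\n")

    # Hypothetical usage: one file per logical chunk of the shop.
    # write_sitemap("sitemap-products.xml", product_entries)
    # write_sitemap("sitemap-categories.xml", category_entries)
    # write_sitemap_index("sitemap.xml", ["https://example.com/sitemap-products.xml",
    #                                     "https://example.com/sitemap-categories.xml"])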
I'm interested in the behavior of Googlebot. Does Googlebot behave the same way when I'm doing a Fetch as Google on a URL and when the Google crawler just comes by to update things, as needed? So for the most part, it's the same. Yes. So what usually happens is we crawl the pages individually. You can also use a tool, the Fetch as Google tool, in Search Console to crawl those pages. The rendering, so if it's a JavaScript site, is something that we do as a second step. So if you use fetch and render in Search Console, then we try to render it right away. But usually, that happens a little bit separately. And that's also a really good way to see whether Googlebot can actually see the content on those pages. Especially if it's a JavaScript site, the HTML file itself will be very minimal, and the content will be pulled in with the JavaScript. So a normal crawl without user interaction, does it have any session, time, duration, or? No. So we try to crawl without any state information, so that every time we access a URL, it's like the first time a user accesses that page. So there's no time on site that is tracked, and no Googlebot clicking on this link and then this link and then this link. It's more that we collect all of these URLs and we crawl them all individually.

So I would like to ask about our evergreen content. We have a big newspaper and we have a lot of old content, what we call evergreen content. So we decided not to show the date, the actual publication date. And we do republishing, recycling of old content, without the date. Is that good, or should we add it again? Because maybe some content was published back in 2012, and we don't want to show that to the reader. OK, so how do you republish it then? Do you put it on a new URL, or? No, on the old URL. We change the publishing date, and so it comes up on the homepage again, and then I signal to Googlebot that I changed something, maybe new pictures, some internal linking. But I'm actually republishing the old content. OK. So it's like, but without the date, the original date? So without the date, the old date? So no date anymore in the HTML. OK, so I think if you change something and you update a page, then that's kind of... that's fine. Some people have the original date and the last updated date on the article or on the page that they publish. That, I think, helps the user a little bit to understand what the context of these pages is. But essentially that's up to you. I don't know how Google News would handle that. I believe they have some guidelines on republishing older content with a new date; that might be trickier. But for web search, it's a page, and if you update it, then you can tell us that you updated this page. That's kind of normal.
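John mentions that some sites show both the original date and the last updated date. One common way to make those two dates explicit, in addition to showing them in the visible text, is Article structured data with datePublished and dateModified; a minimal sketch with placeholder values (this is an illustration, not something prescribed in the answer above):

    # Sketch: emit Article structured data that carries both the original publication
    # date and the last-updated date for a republished evergreen article.
    import json

    def article_jsonld(headline, published, modified):
        data = {
            "@context": "https://schema.org",
            "@type": "Article",
            "headline": headline,
            "datePublished": published,   # original date, e.g. "2012-03-14"
            "dateModified": modified,     # date of the latest update, e.g. "2017-12-01"
        }
        return '<script type="application/ld+json">' + json.dumps(data) + "</script>"

    # print(article_jsonld("How to pack for a move", "2012-03-14", "2017-12-01"))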
John, can I have a follow-up question on this one? Sure. So a few years back, Gary did a really nice presentation on Googlebot at SuperWeek, and he was talking about there being a separate thing, called the scheduler, apart from the actual bot, that actually decides when a page will get crawled and indexed. Is there a way to actually speed that up once a page is already in the scheduler? By, I don't know, internal links or sitemaps or anything else? Or once it's there in the scheduler, is it basically all settled when it's going to get crawled? Yeah, so we have something like a scheduler that sits before our crawling to try to figure out when we need to re-crawl things. So especially if something hasn't changed for a while, we still want to check every now and then to see whether we're missing anything. And that's where this scheduler comes in. And if we have other signals that tell us that this is more important to re-crawl faster, then of course we'll try to get it faster. So things like the updated modification date within a sitemap file, and internal linking, I think, help us as well. Especially if something is linked from the homepage, then we think, oh, this is probably more important than a random article that's elsewhere on the website. So especially for news websites, that's kind of the normal approach as well, where we look at the main category pages and we see, oh, this article is linked here, or it's new. That really helps us to understand, OK, there's some reason why they put this link on the homepage. So maybe we should check it out and see if we're missing anything.

And does the number of internal links also matter? So for example, if you add one link site-wide, will that speed up the re-crawling process? Um, maybe. I don't know if that would always apply. So the internal linking helps us to understand the context of the page a little bit better and how important that page is, which can help us to understand that we should be checking this page more frequently. But it's not necessarily a sign that something has changed now. So just adding another internal link from some random other page doesn't mean that we think, oh, we need to re-crawl this right away. It might help us to understand that this is not just some random article, but actually more important, so that we should re-crawl it a little bit more frequently.

Also, how much notice do you take of the change-frequency attribute in the sitemap XML? I've always thought it's more of a suggestion than anything, but is that the best lever for encouraging a re-crawl? We don't use that at all. No, not at all. We only use the date in the sitemap file. Got it, OK. So, setting it to "always"... Yeah, I know, I know. But it's something where we've noticed that, I mean, it's kind of the update frequency which you're specifying there. And if you're also specifying the date, then that's kind of the exact last update date. And we've noticed a lot of people just have a sitemap file with everything set to daily, or everything set to always, like it always changes all the time, in the hope that Googlebot crawls it more frequently and then ranks it better. But that hasn't been that useful. Also the priority in the sitemap file, we don't use that at all for web search, just because nobody really knows what they should specify there. And a lot of people end up just putting priority one for everything and saying, everything is important. And in that case, I don't know, nothing is important, we can't tell. We have to figure it out anyway.

RSS feeds, I've seen them crawled quite often on WordPress sites, does that help? Yes. RSS feeds help us to find those new pages. We process them similarly to sitemap files. With RSS feeds, you additionally have the ability to use PubSubHubbub, where you can push the updates to us directly, which is pretty much the fastest way to get those updates to us.
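A small sketch of such a PubSubHubbub publish ping, assuming your feed already declares a hub (Google's public hub at https://pubsubhubbub.appspot.com/ is a common choice) and that the requests library is available; the publisher simply tells the hub which feed URL has new content:

    # Sketch: notify a PubSubHubbub hub that a feed has new content,
    # so subscribers (including Google) can fetch the update right away.
    import requests

    HUB_URL = "https://pubsubhubbub.appspot.com/"   # the hub your feed declares

    def ping_hub(feed_url, hub_url=HUB_URL):
        resp = requests.post(hub_url, data={"hub.mode": "publish", "hub.url": feed_url})
        # The hub typically answers 204 No Content when the ping is accepted.
        return resp.status_code == 204

    # ping_hub("https://example.com/feed.xml")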
And do we have any benefit if we use a sitemap file? Yes. Also for smaller sites? I think for smaller sites, it's more a matter of organizing things in your head; working on the sitemap file is useful because you think about which URLs you really care about, what the exact URL is that you need to use for internal linking, for using in an advertisement, or for using elsewhere. You think more about what the exact URLs are and how the crawling should look for the website. But for Googlebot, for indexing, I would say maybe under, I'm just making up a number, maybe 100, 500 pages, a sitemap file doesn't really change anything for crawling. But I think, especially for smaller sites, it's useful to really think about how Googlebot looks at a website. I have good internal linking on my pages, so I don't have any benefit if I do the sitemap XML file as well? Well, the sitemap file helps us to crawl a little bit better, but if you have a small site, we can crawl it well enough for the most part. I mean, it's not that you'll have a disadvantage if you use a sitemap file. So if you use a CMS that has a sitemap file by default, that's great.

All right, any questions from you all as well? Otherwise, maybe I'll look up to see if we have more. I still have one. Oh, OK. We also run a competition website, and obviously with competitions like Win a Car and stuff like that, after two or three weeks you cannot take part anymore. So they're not really interesting for the user anymore. So we then put these single competitions on noindex, because they're not of value, but I find it takes Google a lot of time to remove them from the index. And the thing is also, when we put them on noindex, we remove them from the sitemap XML. So would you recommend doing that? Or should we keep them in the sitemap XML maybe for two or three weeks, so that you can notice that it's noindex now? So you can keep it in the sitemap file, and if you change it to noindex, then that's like a change on the page. You can change the date in the sitemap file. So that can help. There's also a meta tag, unavailable_after, I think, where you can say this page will be unavailable after a certain date, if you know ahead of time when it actually gets removed. That can help.

Can I have a follow-up? Sure. There is a new penalty for rich snippets, for using event markup. Is this a case where we could use this event snippet to have nicer search results? Like, the competition is only available for the rest of the day, stuff like that. Yeah, so if it's really a physical event where someone goes there, something where people actually go, or where you would send out an invite and say, hey, everyone come here, we're doing a big opening of our new business or something, then that makes sense. But if it's like, we're doing a contest next week, then that's not a good idea. It has to have a location, I think, in the markup. I did it wrong, too. I think it's something that a lot of people ran into, and we just wanted to be really clear. And I think a lot of people are not trying to spam Google by doing this. They're just like, well, my sale is kind of like an event. Maybe I should use this markup, and it looks nice. Yeah, I think that's, I mean, that's always tricky if the competition is doing it, but just because they're doing it doesn't necessarily mean you can do it as well. We have competitors we lose to, and they have snippets like that, and we just have the meta description, so. It's always frustrating, yeah.
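For the expired-competition case a bit further up, the unavailable_after directive John mentions goes into a robots meta tag; a minimal sketch with a hypothetical helper and an example date string (Google has historically accepted RFC 850 style dates here, so check the current documentation for the exact formats it supports):

    # Sketch: emit a robots meta tag that tells Google a page should drop out of the
    # results after a known end date (e.g. a competition that closes).
    def unavailable_after_tag(end_date_str):
        # end_date_str: e.g. "25-Dec-2017 23:59:00 CET"; check Google's documentation
        # for the date formats it currently accepts.
        return f'<meta name="googlebot" content="unavailable_after: {end_date_str}">'

    # print(unavailable_after_tag("25-Dec-2017 23:59:00 CET"))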
OK, let's see. OK, so some of these questions are really long, which are kind of hard to go through. Would you recommend expanding the meta description? Because now you changed it to, you know, longer. Yeah, I don't know how much we changed there, because we've always been experimenting with the snippet length. And sometimes we show more, sometimes we show less. And that's something that I think is sometimes a bit tricky. What I would recommend doing is taking the queries that you see in Search Console, where people are actually searching, trying those out, and then seeing what your pages look like with the snippet. And if you're OK with the way they look, then that's fine. If you would prefer to have something different there, then that's something that you can change in the meta description. So it's not the case that I would say everyone has to write a long description. We show a different snippet depending on the query that people search for anyway. So if someone is searching for something that's only within the text of your page, then we try to show that content there anyway. So even if you have a long and nice snippet or meta description, it doesn't mean we'll show that as the snippet all the time. But I would double-check and look at it and see if it makes sense or not.

I think it's also one of those things that can change over time. And it can be that we notice, oh, we're showing too many, too long snippets, and nobody has a chance to go through the search results anymore. Maybe we should make, like, one short, one long, or maybe one third long and two thirds short on the search results page. So if you go out and spend a lot of time on changing all of the descriptions on your site, then maybe we won't actually show that in the long run. So especially since the snippet isn't a ranking factor, it's more a way of showing users what the page is about, it's something where you kind of have to balance your time and think, is it really worthwhile to go through 100,000 pages and write a new description? Or should I just keep this in mind when I'm making new pages and not worry so much about this limit of 160 characters or whatever it used to be? Because I think especially when we make changes like this, they tend to go back and forth a little bit until we find the right balance.

Yeah, and what are you doing now with the answer boxes? As you know, there's this big issue with voice search and answer boxes. So should we work on that, is this the future for these kinds of results in organic? Because now, you know, in the search results, in organic, on desktop and on mobile, we're like, really very, you know. Yeah, so I think with the featured snippets, it's tricky, because you don't really know what will happen there. But it's also something where the behavior of people searching has changed over the years. Where in the beginning, people would just search for one keyword, another keyword, and then we have to show a list of pages because we don't know what they actually mean. Sometimes people now search with a full question. And if they search with a full question and we have a page that answers this question completely, then maybe it makes sense to show a part of that page as a bigger snippet, maybe on top of the search results, maybe on voice as something that we actually say to the user. And these are kind of behavior changes that are happening across the board. And it's not so much that for SEO reasons you should change this, but rather to think about, well, users are searching differently. They're finding my content in different ways. Maybe I need to adapt my content to match what users are actually doing. I would look into it. I mean, it's something I notice particularly when, so, sometimes I go to schools and talk about search.
And when I see how kids search on Google, on the internet in general, they search very differently. And when I see them type a question into Google, I'm like, oh, this will never work. And then it does bring results for them. But these are changes where, for them, that's kind of the natural way to interact with Google now. And that generation is going to get older and going to start buying things. And if they continue to search like this and all of your optimizations are for keyword, keyword, keyword, then maybe your pages are seen as less relevant. But it's also something where we have a lot of algorithms that try to understand what the query actually means and to try to map it to existing pages. So it's not that you have to write content for every type of "how do I" or "should I" or "can I do this" or "must I do this" question. It's more a matter of trying to figure out what users want and trying to answer that.

I have a question on local. I mean, we do relocations in all of Switzerland. We run a marketplace for moving companies. And we've noticed that a lot of the competition have, let's say, hundreds of pages that are basically the same, but they just change the location and sometimes even the domain names. So should we try to optimize for local, or just fill out some spam reports and hope that they will be punished? Yeah. So I think, as you noticed, that's more something that we would see as spam, probably, as doorway pages. And for the most part, we try to figure that out algorithmically as well, but sometimes we don't. So if you want, you can give me some of those and I can take a look at that with the web spam team. But it's something where I wouldn't try to just match any spammy behavior that competitors are doing, but maybe think about what you could do instead to really create something useful for users, maybe in individual locations. Maybe you do have something unique that you can add there that makes it more than just a doorway page where people are cycling through the keywords. I think that's always a tricky situation, when you see your competitor ranking well and they're doing all of these spammy things. It's like, are they ranking well because of those spammy things, or are these just spammy things that they did in the past, and Google is maybe ignoring them, but they're still ranking well? That's always kind of a tricky, frustrating situation.

I have the feeling that Switzerland is a very small country, so you don't have enough results for every combination. Google doesn't have 10 good results to show. So sometimes Google also shows some stuff like that. It's my feeling, I don't know. That can be the case. Yeah, I mean, especially if people start to optimize for even the smallest towns, then... Yeah, exactly that. You are looking for, I don't know, a moving company. A moving company? Yeah. In a small village, and there is no moving company. Yeah. What should Google show? I mean, we try to understand that a little bit with the local business entry, where you can, I think, specify a range that you serve. That can help. But it's sometimes tricky. I mean, maybe it's worth listing those cities, maybe in a dropdown or a form that you have on your site, so that at least we know that you serve those cities as well. Something like this, I think. But then you need a physical location to get the postcard or whatever. Yeah. No. But I'm happy to take a look at these kinds of reports as well.
So that's something I don't handle personally, but I do talk with the web spam team a lot, and with the ranking teams. So it's always useful to have those, especially in smaller countries like Switzerland; maybe it's something that's kind of generic across other countries as well. OK, good. But for those doorway pages, is it only the on-page setup, or are there also other signals, like, I don't know, traffic sources, if it's only organic, is that also a negative signal, or things like that? Or is it just the on-page setup that makes it a doorway page or not?

So what makes it a doorway page for us is if there's no unique value for these individual locations, if you're just swapping out the keywords, and especially if everything is funneled towards the same business and the same kind of service that you're providing. That kind of makes it something where it's like, well, you're just targeting these specific keywords, but actually you're providing something very generic. That kind of makes it a doorway page for us. And usually it's pretty obvious, because those who do these kinds of sites, they do them for every different possible city. And you can see, like, search for one city name that's on the other side of the country, and they have a page for that. And there's no possible way that they could provide service in that part of the country, because it's just too far away. And those are kind of the situations where we can really tell it's way over the top. On the other hand, if you have maybe a handful of pages and you're saying, well, we focus on services in this canton, and this part of Switzerland, and this part of Switzerland, then that's something where, from our point of view, that's less of an issue. You don't have hundreds of pages that just swap out the keywords. You're really trying to narrow things down, saying, well, in this region we have maybe this number of trucks or this number of people who are available to help, to make it clear that it's really unique to that location, that you have some content, some service, that's actually available there.

So does this mean that for job sites and real estate sites, this doesn't apply? Because they can provide value for, like, each single location, city-wise? For job sites and real estate sites, I mean, that's something where you have individual listings for each of these locations. So there you do actually have content. So if you're looking for jobs in one city and you have five jobs in that city, then that's a unique piece of content there. On the other hand, if you have a landing page that says jobs for this city, and it's like, we have jobs in the whole country and the page is just optimized for that city name, then that's more of a doorway page. That's really great, because there were a lot of debates for real estate sites about whether they fall into the doorway page issue or not, and what the threshold was. But based on what you're saying, if the listings are different and specific to those locations, there is no way that a doorway filter would be applied. Yeah, yeah. If you have unique content for those locations, that's perfect. Great, thanks.

All right. No more questions here? Not really a question, more a comment. I read about this last week, I think you posted it on Twitter or something like that, that footer content, so text in the footer, will have a negative impact on your rankings in the future. Is that right? I don't think I would have said it like that.
I mean, one thing we often see is sites that have some content, like normal content, on top. And then in a tiny font at the bottom, they have this big article of text, and when you look at the page, you know, well, they just put this text there for search engines. It's not something that users would read. The case I mean is a category page on a big shop or a commercial site. So users are actually looking for products, but there are always these huge blocks of text. Yeah, I think for the most part, you don't need to do that. Sometimes it helps us if the individual items are just images and there's very little text. It's like, you search for, I don't know, running shoes, and you have a page full of running shoes. But for a search engine that just looks at the text, it's like, a lot of images and links, I don't know what this page is about. Then having some descriptive content, either on top or at the bottom of the list, I think that's perfectly fine. But to kind of have a Wikipedia article on why you should go running at the bottom is kind of over the top. And usually what happens with our algorithms is we start to see that as keyword stuffing. And then at some point, our algorithms are like, oh, these keywords are being used so often on this page, I need to be careful how we actually rank this page. And then the algorithms realize, actually, these keywords are not used in a natural way. And then they might say, well, I don't know how relevant this page actually is for those keywords. So I would try to make the pages so that they have natural content, so that users, when they go to these pages, understand what it is that they're seeing. Especially maybe for users who can't see the images, make it in the sense that if someone looks at the page who isn't able to see the images, can they still tell what this page is about? And sometimes it's enough to have the titles of the products just linked. Because if you see, I don't know, this type of running shoe and this type of shoe, then you already have those keywords on those pages, and we can already tell what your page is about.

I have one more question, on AMP tracking. I mean, if someone comes from Google to an AMP page and then they come to your site, this is often shown as a referral. And I think that's one of the tricky problems, because the AMP page is hosted on the Google AMP cache. And technically, when you click a link, it's like coming from an external page instead of being seen as an internal link. I don't know, you news sites probably have more experience with that? I think that's what we ran into as well. I don't know if there's a way to change that. Maybe there's a way to tag those links and say, this is from AMP, or not. I don't know. We had to basically make it sort of ping the server. So the content that was loaded on the page would ping the server to say, hey, this is a real session, so that you knew, because we do all our tracking by log file analysis, so we had to force it to treat that as the referral rather than the subsequent click being a referral from Google. OK, maybe with an iframe or an image or something like that. Yeah, it was something like that. I don't know exactly how. OK. Yeah. There is actually a Spanish guy, I think he's called Cristiano Oliveira, who did a super blog post on this and how you can solve the issue. So there are solutions. What was the solution, to give it away? I don't remember exactly what it was, but it's an excellent blog post targeting specifically that problem.
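A sketch of the "ping the server from the AMP page" workaround described above, assuming log-file based analytics on your own domain: the AMP version embeds a small tracking pixel (amp-pixel is AMP's built-in component for this) pointing at your server, so the AMP view itself shows up in your logs and the follow-up click doesn't have to be counted as a fresh Google referral. The URL and parameters here are made up for illustration, not any standard.

    # Sketch: build an amp-pixel tag for the AMP version of an article, pointing at
    # a hypothetical /ping endpoint on your own domain so the AMP view appears in
    # your server logs. URL and parameter names are illustrative only.
    from urllib.parse import urlencode

    def amp_ping_pixel(canonical_url, source="amp-cache"):
        params = urlencode({"page": canonical_url, "src": source})
        return (f'<amp-pixel src="https://www.example.com/ping?{params}" '
                f'layout="nodisplay"></amp-pixel>')

    # print(amp_ping_pixel("https://www.example.com/article-123"))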
OK, I think there was a question from here. No? No live question? OK, let me see what else was submitted, so that we don't ignore them completely. We looked at the meta description one already. If I had a contact phone number in the meta description that's shown in the search results, how will ranking and click-through be affected if the caller doesn't actually visit the website? So I guess this wouldn't really change the ranking of the page if someone sees a phone number directly in the search results and calls it instead. You would still see it as an impression in Search Console, so you'd kind of be able to track it that way. But obviously, Google doesn't really know what users do with a phone number that's just provided in a snippet. I don't know if there's a way to use maybe structured data to specify the phone number, so that it's actually linked properly. That might be something, especially for, I guess, local businesses. We'd also have the local business entry, where we would have the phone number there as well.

A search term in Google Trends has shown a negative trend since 2004; does that mean that the actual search volume has gone down? Or how does Google Trends measure that? I don't actually know. So you'd probably need to check with the Google Trends folks on that. It's very possible that the total volume of searches has gone up so much that that share has gone down, but the absolute number is still stable or has gone up as well. I don't know how Google Trends actually tracks that. You could take a related term and check against that to get a feeling. That's a good idea. Check with other terms.

We talked about the meta description already. We're running a ccTLD domain for Canada, and we're planning to expand our base to another country with another ccTLD with the same content. Will it affect the SEO? I think any time you create a second website with similar content, you're kind of competing with yourself, in a sense. So it can definitely have an effect on your original website's traffic. Because suddenly, instead of just your site being shown for those queries, we have the choice between this country's site and the other country's site, and maybe we'll show the other country's site instead. So it's not that the total traffic for your business would go down, but per website you might actually see this difference. And usually that's a good thing, because then the more local traffic goes to the more local website. Let's see. I think we talked about the sitemap. Yeah, I don't know. Maybe we'll just open up for more questions from you all.
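On the phone number point a bit above: one way to state the number explicitly, rather than only writing it into the meta description, is local business structured data with a telephone property. A minimal sketch with placeholder values:

    # Sketch: LocalBusiness structured data that states the phone number explicitly,
    # instead of only putting it in the meta description. Values are placeholders.
    import json

    business = {
        "@context": "https://schema.org",
        "@type": "LocalBusiness",
        "name": "Example Moving Company",
        "telephone": "+41 44 000 00 00",
        "address": {
            "@type": "PostalAddress",
            "addressLocality": "Zurich",
            "addressCountry": "CH",
        },
    }

    jsonld = '<script type="application/ld+json">' + json.dumps(business) + "</script>"
    # print(jsonld)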
I'll keep it easy. I've got a question. I'm seeing lots of paginated results being indexed. So we've got the rel=prev/next in there, obviously. And I kind of see it, as a mental model, like a canonical tag: it basically canonicalizes into the root of that category page. But even so, we still see the paginated versions indexed. And obviously, that multiplies things pretty quickly. Am I right in thinking that about the pagination, that the rel=prev/next works like a canonical tag? And if so, is there anything we can do to really double down on that and say, hey, please don't index our paginated versions? You can explicitly tell us not to index the paginated versions by having the canonical tag on those paginated versions point to the first page. But what happens then is that we also don't look at the links on those paginated pages. So sometimes that's OK. Sometimes it's OK to have just the first page indexed and only the first page of results linked within the website. If you have enough different categories, then all of your products will be in one of these categories on the top page. So maybe that's an option. Maybe you could also say, well, I'll just do a cut-off and say, after page five, I'll just noindex everything, or rel canonical everything to maybe the last page. That might also be an option. So we do sometimes show the paginated results in Search as well. And it's not so much that we say, well, we don't trust your rel=next and rel=previous. It's more that we think, well, maybe page four is the most relevant result for this specific query that the user is currently submitting.

A follow-up on that. Quite often I've seen noindex, follow on page two and on the rest of them. So if I listen between the lines, I think you would not necessarily recommend doing that. So it's kind of tricky with noindex, and I think there's somewhat of a misconception in general in the SEO community, in that with a noindex and follow, it's still the case that we see the noindex. And in the first step, we say, OK, you don't want this page shown in the search results. We'll still keep it in our index. We just won't show it. And then we can follow those links. But if we see the noindex there for longer, then we think this page really doesn't want to be used in Search. So we'll remove it completely. And then we won't follow the links anyway. So a noindex and follow is essentially kind of the same as a noindex, nofollow. There's no really big difference there in the long run. What about using a View All page? The View All page is also an option. If you can do that, that might be something to index. I think we did a blog post about the different types of setup there. I think for most sites, the paginated indexing just more or less works. It's not something where we see a lot of problems. It's trickier when you have filtering in addition to pagination. When you can filter for type of product, and size, and color, and all of these things, then you have this kind of faceted navigation, as we call it, and if that's not canonicalized or handled correctly, it gets tricky.

All right. Any more questions from you all? Can I maybe ask a follow-up on this pagination thing? So would you say that Google doesn't take into account the number of results, that it's not like a ranking signal? So it's OK if you just have the first page, and it's really good, and that's more than enough? Or if you have, I don't know, 50 pagination pages, will Google see that as a quality plus and, at the end of the day, rank it better just because it has more results than the competition? I don't think we count the number of items on a page. So it's not that we say more items is better. So just because you have a lot of paginated pages doesn't make it necessarily better. So I wouldn't worry too much about the number that you actually show there. Cool. Thanks.
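As a small sketch of the options discussed above for a paginated category: the helper below emits a self-referential canonical plus rel=prev/next links for page N, and optionally a noindex beyond a chosen cut-off (the "after page five" idea). The URL pattern, parameter name, and cut-off are placeholders; swapping the self-referential canonical for one pointing at the first page would implement the other variant described above, with the caveat mentioned about links on those pages not being looked at.

    # Sketch: generate <head> tags for page N of a paginated category:
    # a self-referential canonical, rel=prev/next links, and an optional
    # noindex beyond a chosen cut-off. URL pattern and cut-off are placeholders.
    def pagination_head(base_url, page, last_page, noindex_after=None):
        def page_url(n):
            return base_url if n == 1 else f"{base_url}?page={n}"

        tags = [f'<link rel="canonical" href="{page_url(page)}">']
        if page > 1:
            tags.append(f'<link rel="prev" href="{page_url(page - 1)}">')
        if page < last_page:
            tags.append(f'<link rel="next" href="{page_url(page + 1)}">')
        if noindex_after is not None and page > noindex_after:
            tags.append('<meta name="robots" content="noindex">')
        return "\n".join(tags)

    # print(pagination_head("https://example.com/shoes/", page=3, last_page=12, noindex_after=5))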
Hey, John. Hi. Hi. So in the pagination section, is it better to have, suppose for page one I'm using a certain title tag, for page two, should I use a different title tag, or should it be similar to the first one? That's actually a good question. So we do look into that as well. So if you have a paginated set and the pages look like they belong together, then that's a clear sign for us that this is actually a set. So if you have the same title, or the same title and then like page one, page two, page three, that's something that's useful for us. Whereas if the titles are very different across the paginated pages, then we might assume that these are actually very unique pages that we shouldn't be grouping together as a list of items. So do you consider the combination of a group of pages for ranking? Meaning, I have five pagination pages; do those accumulate for one single query, or for multiple queries related to each other? So usually what happens with paginated pages is we try to understand the pages as a set a little bit better. So the context of the pagination helps us to understand that these pages belong together. But it's not really the case that by having a paginated set, you will rank better. It's more that we understand the pages a little bit better and that we can show them to the right users a little bit better. OK, OK, thank you very much. So on the title tag thing, an ideal state might be like the generic title of the category plus page one, page two, or... yeah, OK. OK, got it, thank you. Yeah, so what sometimes happens is people just take the title of the first item and use that as the title for the whole list. And then for us, it looks like these are actually very unique pages that shouldn't be treated as a set of pages that belong together. But I think with regard to pagination, it's also important to keep in mind that this is very subtle, how we try to understand the context of these pages. It's not like the most critical thing on your website. So if you can set this up properly within your CMS and just let it run, then that's perfectly fine. If you can't set that up without hiring a developer who's going to spend a month of time to implement this on your website, then probably there are other things that are more important to work on.

OK, I think time-wise we're pretty much finished. Maybe we can take one more question from any of you. Mr. John, I have a question. OK, from you, you get the last question. Actually, in the meta title, if I am using the date, does it affect the ranking? In what? Where are you using it? The meta title, using the date in the meta title. I don't think that changes anything for us. So we do use the title for some amount of ranking, not a lot. But primarily, we use that to show in the search results. So if the title is important for you to be seen in the search results, maybe that makes sense. If the date in the title just changes every day anyway, then maybe it even looks weird. So I would think about it more like: what value do you think the user would get from the date when they see it in the search results? Actually, the page is about gold rates. If you are searching for today's gold rate, then it will appear for that one. We try to pick up the date in different forms, though. So you don't necessarily need to put it in the title tag. If it's on the page somewhere, that helps us as well, too. OK, fine. All right.

OK, one last one. OK, I also asked a question in the online forum. The question is about the danger of cloaking. So if you use a caching plug-in in a template system in order to deliver pages faster, there can be slight differences in the code. But there is no intention, for instance, to really show Googlebot a different page than the real user gets. The only intention is that if a page is slightly modified, it will be re-delivered after a couple of minutes. But if it stays the same, it will be delivered from a cache.
Is there any danger that Google might regard this as cloaking? I don't think that would be a problem. So we might recognize the difference from crawl to crawl, but I don't see that as being a problem. If the main content is essentially the same, if you're just tweaking the JavaScript, or in some cases, if you're doing AB testing on a page, that's perfectly fine and normal. So that's not something I would worry about. OK, thanks a lot. All right, let's take a break here. It's been great having you all. Good to have a bunch of local people here as well. Since everyone here speaks German, maybe we should do this for the German Hangout as well. Every now and then, at least. Thank you all for joining live. And thanks for submitting so many questions that we didn't get to. If there's something important on your mind, I'll set up probably another round of Hangouts next week, just before the holidays. So you can add them there, or you can post in the Webmaster Help Forum as well, or ask us on Twitter. And we can try to help solve those problems too. All right, so thanks again for joining. And hope to see you all again in one of the future Hangouts. Thank you. Thank you. Thank you.