All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these Office Hours Hangouts together with webmasters and publishers all around, where we answer questions about websites and web search.

First of all, I want to wish you all a great start to the new year. I hope things have been starting off really well for you so far, and I hope it'll be a good year for all of you as well. From our side, I imagine there will be some changes happening. Some of those might be kind of easy to take, and some of them might be a bit trickier. For example, Google Plus is going away, so we'll have to figure something out to make sure that we can continue to run these Hangouts in a way that you can submit questions. I think there are various options we can look at, including some that are pretty easy.

Then on the Search Console side, there are lots of changes happening there, of course. They've been working on the new version, and I imagine some of the features in the old version will be closed down over time, and some of those might be migrating to the new Search Console. I'm sure there will also be some sections of Search Console that will just be closed down without an immediate replacement, primarily because we've seen there are a lot of things in there that aren't really that necessary for websites, where there are other good options out there, or where maybe we've been showing you too much information that doesn't really help your website. An example of that could be the Crawl Errors section, where we list all of the millions of crawl errors we found on your website, when actually it makes more sense to focus on the issues that are really affecting your website rather than just all of the random URLs that we happened to find. So those are some of the changes that I think will be happening this year.
And like I mentioned, some of these will be easier to move along with. Others will be a bit trickier, but in any case, there will be changes. And I'm sure there will also be changes around web search with regards to ranking and the way that we show the search results. That's kind of normal, and it's a sign that people here still care about Search and are working hard to make things better. So all that said, I hope it works out really well for all of you this year, and I'm looking forward to a bunch of really great Hangouts with you all. With that said, there are a bunch of questions that were submitted already. But like always, if any of you want to get started with a first question, feel free to jump on in.

Can I ask a couple of questions?

Sure.

So these are questions about dynamic rendering, which you talk about quite often. We had a very interesting, fascinating experience several weeks ago, and here's what happened. We have a website that is very JavaScript-heavy. It's written in React, and it uses server-side rendering. What we found was that after some changes on our website, some time passed, and then suddenly our indexed results in Google Search got completely broken. We figured out that what was probably happening was that Google first crawled our pages as they were server-rendered, and then after some time it executed the JavaScript on our pages. Something went wrong in that context when the JavaScript was executed, and the page got broken during this execution of JavaScript. So I was wondering whether you can confirm that Google's crawler would overwrite the version of a page that it indexed from the server-side rendering with the result that it gets from executing the JavaScript.

Probably. Probably we would do that.
So if you deliver a server-side rendered page to us and you have JavaScript on that page that removes all of the content, or reloads all of the content in a way that can break, then that's something that can break indexing for us. So one thing I would make sure of is that if you deliver a server-side rendered page and you still have JavaScript on there, make sure it's built in a way that when the JavaScript breaks, it doesn't remove the content, but rather just hasn't been able to replace the content yet.

OK. So our solution, the thing that works for us, and I'm wondering whether we did the correct thing, was that, as you say with dynamic rendering, we tried to serve different variants of the page to Googlebot and to regular people. We removed all JavaScript completely from the pages that we were serving to Googlebot, and it seems that this works. We had lots and lots of broken pages in Google Search, and now we are getting somewhat closer to normal. So I was wondering, is it OK for Google to get a variant of the page that is not quite the same as what the normal user is getting?

It should just be equivalent. So if you're doing server-side rendering and all of the functionality is in the static HTML version that you serve, then that's fine. Whereas if you do server-side rendering and just the content is rendered, but all of the links, for example, don't work, then that's something where we're missing functionality and we might not be able to crawl it as well. The other thing to watch out for, depending on how you remove those JavaScripts, is that a lot of structured data uses JSON-LD blocks. So if you remove those too, then we don't have that structured data.

No, we still have that. We just removed the links to the scripts. OK. So that should be fine, right?

Yeah.
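The defensive pattern John describes, keeping the server-rendered content in place when client-side JavaScript fails, could be sketched like this. This is a minimal illustration, not anything Google prescribes; the function name and shapes are made up for the example:

```javascript
// Minimal sketch: only swap in client-rendered markup when the client-side
// render actually produced content. If the JavaScript breaks or returns
// nothing, the server-rendered HTML stays in place, so a crawler that
// executes JavaScript later still sees the content.
function pickRenderedContent(serverRendered, clientRendered) {
  if (typeof clientRendered === "string" && clientRendered.trim().length > 0) {
    return clientRendered; // client-side render succeeded; use its output
  }
  return serverRendered; // otherwise keep the server-rendered fallback
}
```

The key design choice is that the failure path leaves the existing server-rendered DOM untouched, instead of clearing it out before attempting the client-side render.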
And a question related to that: previously, some years ago, there was the idea that a search bot should get the same variant of the page as what a user gets. Otherwise, the situation was described as cloaking, when a bot was getting something different from the user, and it was supposed to be bad. What's the situation with this now?

That's still the case. I mean, with dynamic rendering, what you're doing is providing the equivalent content in a different form. And in a sense, that's the same as you often do with mobile pages, where the desktop version is shown and the mobile version is a slightly different one. It's the equivalent content, the same functionality, just served in a different way. So that's more how we see it. It's more like a dynamically served mobile page than actual cloaking. Cloaking for us is primarily a problem when the content is significantly different, when Googlebot sees something that's very different from what a user would see. Either the content is very different, maybe spammy versus non-spammy content, or the functionality is significantly different, so that we can't crawl it properly or can't see the real layout of the page. All of these things make it hard for us to properly judge the page with regards to how a user would see it.

OK. I see. Thank you.

Sure. All right. Any other questions before we get started?

Hi. I have one, if that's all right, around mobile usability testing. We noticed about a week ago that Google was flagging us for mobile usability errors around the content being wider than the viewport and clickable elements being too close together. For the last couple of instances, every single page we've retested comes back perfectly fine. It's only the validation that seems to fail. From what I can understand, it looks like assets aren't being loaded, but we can never reproduce what appears to be failing during validation.
And we also noticed through the logs that Google appears to be fetching assets that we haven't referenced for over five months. So it feels like it's trying to load a very, very old version of the page and can't find the matching assets. We've tried to patch it so any missing style sheets go to the current working one, but that still hasn't fixed it. Is this potentially related to the new changes coming in with mobile-first indexing?

The last part is the easy one to answer. The mobile-friendliness evaluation is completely separate from mobile-first indexing, so that's something that's being done completely independently. The indexing side is more a technical matter of switching things over to a mobile crawler and using the mobile version for indexing. And mobile-friendliness is more a matter of recognizing when a page works well on mobile devices, so that we can show it a little bit higher in the search results.

I think what you're seeing there is probably related to the things that you touched upon. On the one hand, it might be that we're looking at older versions of the page. When we index a page, obviously we index it in one state, and then we need to be able to test the indexed version. And depending on the timelines there, it might take a little bit of time for us to connect the two sides. So there could be some delay between the indexing of the HTML page and the rendering, which means we should ideally still be able to render that page a little bit later. I don't know if five months is reasonable there. I think that feels a bit long. In general, I'd still recommend redirecting the old resources to whatever new resources you have, in particular if you use versioned resource URLs.
So if you have URLs that have a version number in them, and that version number changes every time you do a new push, and we can't access the old URLs to see the valid CSS or JavaScript files that you use, then it's really hard for us to do rendering in a reasonable way. When we render the page, if none of the resources work, then we see the static HTML page without the styling, for example, and then we don't know if it's mobile-friendly or not. So some kind of a redirect from the old versions to the new versions of the embedded content would be really useful.

The other thing that I've seen here is that sometimes it's more a matter of us just temporarily not being able to fetch those resources. That's more a matter of us wanting to crawl more from a website but temporarily not being able to, because we have so many things lined up for that website and we don't want to overload it. That's one thing I've seen sometimes trigger these issues in Search Console. And that, in general, is not a problem, because we can line those resources up and crawl them a little bit later. But our systems internally will flag it as, like, we couldn't test to see if this is really mobile-friendly or not. And then Search Console is a bit too helpful and alerts you of all of these issues right away. So that's one thing we're going to work on, to make sure that those alerts are a little bit more based on the stable version that we would use for indexing.

OK, cool. Yeah, some kind of visibility and understanding into that point would help. I suspected it would be a style sheet load error, but we can't see that from the validation, whereas the live test is perfectly fine. I thought this might be a recent issue, because I found other instances of people having the same kind of false positives. So I was hoping that there was some sort of easier explanation that would be a faster fix for us.
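The versioned-resource redirect John recommends above could be sketched as follows. The URL scheme here, /assets/name.v123.css with a version number that changes on every push, is a made-up example of the kind of setup he describes, and the current version constant stands in for whatever the latest build produces:

```javascript
// Hypothetical versioned-asset scheme: /assets/<name>.v<NN>.<css|js>.
// Requests for old versions get a redirect target pointing at the current
// build, so a crawler rendering an older indexed HTML snapshot still
// fetches valid CSS and JavaScript instead of a 404.
const CURRENT_VERSION = "v456"; // stand-in for the current build's version

function assetRedirectTarget(path) {
  const match = path.match(/^\/assets\/(.+)\.v\d+\.(css|js)$/);
  if (!match) return null; // not a versioned asset, nothing to redirect
  const target = `/assets/${match[1]}.${CURRENT_VERSION}.${match[2]}`;
  return target === path ? null : target; // already current: no redirect
}
```

A server would answer any request matching the old pattern with a 301 to the returned target, leaving current-version requests untouched.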
But I guess we'll watch over the next couple of days and see how we go.

Yeah, I think if your live test is OK, then you should be all set. I wouldn't worry about it.

Awesome, thank you.

Sure. All right, let's jump into some of the submitted questions. We should have time for more questions from you all later on as well. Or if you have any comments in between, feel free to jump on in. Let's see.

The first one goes back to 2011, when Pierre Far said that sites served with a CDN are not treated any differently than non-CDN sites. The question is, is that still the case?

And that is still the case. So for us, a CDN is not anything specific that we would explicitly call out and say we need to treat it specially; it's just a way of hosting a website. That has implications with regards to how fast your website can load and where it's available to the audience, and all of that is perfectly fine. I think the question kind of aims at the geo-targeting aspect here. It goes on: additionally, if I have a site hosted in a different country for my business using a CDN, and I indicate in Search Console the country audience that the site is targeting, is that something I need to worry about? That's perfectly fine. I think that's one of those aspects where a CDN makes a lot of sense, where you have local presences with your CDN that make sure that your site is really fast for local users. That's definitely not something that we would be against. In general, when it comes to geo-targeting, we primarily use the top-level domain. So if you have a country code top-level domain, that's a strong sign for us that you're targeting a country. Or if you have a generic top-level domain, we use the setting in Search Console. With those two, we pretty much have things covered. And if your CDN has endpoints in different countries, that's not something we really worry about.
If your hosting is in one country and you're targeting a different country, that's totally up to you. That's a decision that might make sense from your side, maybe for financial reasons, maybe for policy reasons, whatever. As long as we can recognize the target country with either the top-level domain or the Search Console setting, that's fine. The worry here is also that this setting isn't available in the new Search Console yet. So what does that mean? I think it's just a matter of time until the setting is also available in the new Search Console. We definitely plan on keeping this setting.

A question about image SEO. We'll have a technical change in our shop that will change all of our image URLs. The compression of the images will remain the same. Does Google know this is the same picture, or will we be losing rankings? Should we set up redirects for the image URLs?

That's a good question. So yes, this will affect your website in Google Images, in the image search. In particular, if we see changes in URLs with regards to embedded images, then that's something where we will have to go off and first recrawl those images, reprocess them, re-index them, and get them all ready for image search again. So if you just change the URLs that are referenced within your pages, then that will result in those images being seen as new images first, and with regards to ranking, they'll have to work their way up again. So setting up redirects, like you mentioned, is a fantastic way to do that, because that way we know the old images are related to the new ones, and we can forward any signals we have from the old ones to the new images. That's really what you should be aiming for there. This is particularly relevant if a site gets a lot of traffic from image search. And it's something where image search is a little bit different than web search, in that we find images tend not to change as much, so we don't recrawl them as frequently.
So if you make significant changes in the image URL structure that you're using, it's going to take a lot longer for us to reprocess that. In particular for images, you really need to make sure that those redirects are set up. And that's something that oftentimes you don't see directly, because you load the page, it refers to the new image URLs, and you don't realize that the redirects between the old image URLs and the new image URLs are actually missing. So if you get a significant amount of traffic from Google Images, make sure that you have those details covered.

If I change my complete website theme, will my rankings be stable or will they fall? I'll use the same content, the same URL paths, the same images, but the layout, JavaScript, and everything else will change.

So yes, this will result in changes in your website's visibility in Google. It's not necessarily the case that it'll drop; it can also rise. If you significantly improve your website through things like clearly marking up headings, adding structured data where it makes sense, and using a clear HTML structure that makes it easier for us to pick out which content belongs together and which content belongs to the images, all of that can have a really strong positive effect on your website in Search. So that's something where you could see significant positive changes. And this is one of those areas where, when you're working on SEO for your website, you can make a big difference with regards to how we and other search engines see your site. So it's not the case that your rankings will always fall when you make significant changes like this, but they will definitely change.

We're currently serving the content twice in the source code of our product pages: one block shows for desktop and the other one for mobile. One version is hidden with CSS depending on which device the user is on. Is that acceptable for Googlebot, or will it see it as malicious hidden content?
So first of all, that's perfectly fine for Googlebot. We can deal with that. We can recognize when content is hidden and try to treat it slightly differently. However, it seems like something where maybe you're adding more complexity than you actually need, and I suspect it's a bit trickier to maintain a website like that, where you always duplicate the same content. So my recommendation would be to try to find a way to use responsive web design to serve this in a way where you're not duplicating the content. That makes the pages a little bit smaller, so they load faster too, and then you don't have to worry about other search engines and how they might handle this. Again, from Googlebot's side, this is fine. It will probably result in both of these blocks being indexed for the same page, but in general, that's not something that would cause any problems.

If a company has different locations in different states, is it better to have different sites for each location, or to have one main site with all of the locations on it, with each page having different schema structure to support the local SEO? We want to target each state location.

That's ultimately up to you with regards to multiple sites or one site. Personally, I recommend having one strong site rather than having it split up into multiple smaller sites. So maybe having one strong website for the company in general, and then individual landing pages for the individual locations, so that people who are searching for those locations can find them and see what makes these locations a bit special. But at the same time, you have one really strong website that is really easy to find in Google when people are searching for your company in general. So that would be my recommendation there. You can, of course, have separate websites for these if you prefer; that's an option as well.
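Going back to the duplicated desktop/mobile blocks for a moment: the responsive alternative John recommends means restyling one shared block per viewport instead of hiding one of two copies. A minimal sketch, with made-up class names, might look like this:

```css
/* One shared content block, no duplication */
.product-details {
  display: flex;
}

/* On small screens, restyle the same block with a media query
   instead of hiding a desktop copy and showing a mobile copy */
@media (max-width: 600px) {
  .product-details {
    flex-direction: column;
  }
}
```

The breakpoint value here is arbitrary; the point is that there is only one copy of the content in the HTML, so nothing needs to be hidden for either device class.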
The thing I would watch out for here, though, if you have separate websites, is to think about how you want to deal with shared content. In particular, if you have informational pages about the products or services that you offer, and you're duplicating these across all of these different websites for the individual locations, then which of these pages would you like to have shown when someone is searching for something general, something about the product or service that you offer? If you have that spread out across all of these different pages, then they're all kind of competing with each other. So it might make sense to pick one of these locations and say, this is the primary location, and this is where I want to have all my general content indexed, and to have the other websites be a little bit more focused on their individual locations, just listing their additional information there. So again, this is one of those things where, if you have separate websites, you're almost tending towards a situation where you're sharing a lot of content and trying to pick one main location anyway. So you might as well just make one really strong website and have individual landing pages for the individual locations.

After the confirmation of using a noindex X-Robots-Tag in the HTTP header for XML sitemaps, how does it affect the crawling frequency of XML sitemaps that have a noindex?

So I think this is something that ended up being unnecessarily confusing to a lot of people. For us, an XML sitemap file is a file that is primarily meant for search engines to be processed automatically. It's not meant to be shown in Search. And with that, it's something that we treat differently than a normal HTML page. With a normal HTML page, we try to process it and see what it looks like, pull out the links that are located there, and figure out how often we need to recrawl the page.
But a sitemap file is essentially a machine file, one server talking to another server, and that can be treated completely differently. We can fetch the sitemap file as frequently as we need. Servers can ping us about a sitemap file and tell us, hey, this sitemap file changed, and we'll go off and fetch that sitemap file, look at all of the contents there, and process that immediately. So that's something where a sitemap file is absolutely not the same as a normal HTML page on your website. Some sitemap generators have a fancy way of rendering the sitemap file so that it looks nice in a browser, and that can be a little bit misleading, because it quickly looks like an old HTML sitemap page that you might have had. But in general, an XML sitemap file is machine-readable. It's meant for machines, and it's processed very differently than an HTML page. So all of the effects that you're looking at for normal HTML pages would not apply to a sitemap file. Using the noindex X-Robots-Tag is a way to prevent the sitemap file from accidentally showing up in web search, but it has absolutely no effect at all on how we handle the sitemap file for sitemap processing.

I'm seeing "we don't detect any structured data on your site" in Search Console. The website has been live for two years, and that note has been there since the beginning. The structured data should be part of our code. I can't see why it's not picking it up. What might be the reason?

So probably someone would have to take a look at the individual pages there. My recommendation would be to post in the Webmaster Help Forum with some of the sample URLs where you have structured data, so that people can take a look and see: is this implemented in a way that Google can pick it up, or is there perhaps something else playing in here that's making it hard for us to recognize the structured data on your pages? So that's something where I'd recommend having other people throw an eye on the specific case there.
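To tie the sitemap answer above to something concrete: the noindex X-Robots-Tag John mentions is sent as a plain HTTP response header on the sitemap URL, along the lines of the following sketch (the exact server configuration for producing it will vary):

```
HTTP/1.1 200 OK
Content-Type: application/xml
X-Robots-Tag: noindex
```

The header only keeps the sitemap file itself out of web search results; it does not change how the URLs listed inside the sitemap are crawled or indexed.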
I don't think there is anything general that could be said without actually looking at the pages themselves.

Some issues regarding Search Console: some websites in the new Search Console are not showing any link data, but they were showing it in the old one.

I don't know what specifically you're looking at here. I'd be happy to take a look at what you're seeing. I know the Search Console team is working on updating something with regards to the link reports to make them a little bit more consistent, so perhaps that's just something that's still being worked on.

The second question: people are submitting XML sitemaps in the old one and they show as pending, but the new one shows "couldn't fetch."

I'm not quite sure what you mean there, because the processing of the sitemap file is not done in Search Console. It's essentially just the reporting that we do in Search Console. So yeah, in general, if it works in one version of Search Console, it should work in the other one. I think with regards to sitemaps, that's also one of the areas where we'll probably see some changes in Search Console, because we've moved a lot of the functionality around sitemaps to the Index Coverage report, where you can select individual sitemap files and see the actual effect on indexing: how many of these URLs were indexed, which ones were indexed, and which ones were not indexed. You can see all of that directly in Search Console, which I think is pretty neat. So I would imagine over time we would take the old sitemaps report and turn that off, and try to make sure that we can move as many of the features as possible to the new Search Console.

What's the official position when it comes to advertising CBD oil?

I have no idea about advertising policies or Google Shopping, so you probably need to ask someone from the Google Shopping side, probably in their help forum. But I have no idea about the policies there.

How can I rank well on Google?

That's kind of a broad question.
I don't really have a generic answer that covers everything you can do to rank well on Google. If you're coming at it at this level, with regards to not knowing what to do or where to start, one thing I would do is look at the SEO starter guide, which we've moved now to the Help Center. That covers a lot of the basics around SEO: how to rank well, how to make a website that works well on Google. There are also a bunch of other SEO starter guides out there that are not from Google, which are really good. So I'd look around and go through some of those guides, and see the different aspects that are involved when search engines look at content and websites, and what they would find important and interesting there with regards to understanding how to show these pages to users.

Is it OK to have a single sitemap containing the items plus images, or is it better to have separate sitemaps for items and images?

Both of these approaches work. For us, what happens on a technical level is we take all of the sitemaps that we found for your website, we combine them into one big pile, and then we process them there. So how you split that information up into separate sitemap files is mostly up to you. It's something where you can pick a way that works well for your setup, for your CMS, for your server, for your infrastructure, whatever you have there. So that doesn't really matter for us.

We're using a website-builder startup called Ucraft. I've been following your Hangouts and was asking questions about ranking for "logo maker." There are zero issues on our website according to Search Console. We're providing fast performance on mobile and great UX. I'm unsure what to do to, I guess, help improve the rankings.

So I guess this is always kind of a tricky situation. If you've been working on your website for a while, then sometimes you focus on a lot of the technical details and you forget about the bigger picture.
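Going back to the sitemap question above for a moment: a single sitemap file can carry both the page URLs and their images using Google's image sitemap extension. A minimal sketch, with placeholder URLs, might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/items/blue-widget</loc>
    <image:image>
      <image:loc>https://example.com/images/blue-widget.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```

Splitting the same entries into one sitemap for items and another for images would, as described above, be processed the same way once everything is combined.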
So what I would recommend doing here is taking your website and the queries that you're looking at, and going to one of the webmaster forums. That could be our Webmaster Help Forum; there are lots of other forums out there where webmasters and SEOs hang out. Sometimes they'll be able to look at your website and quickly pull out a bunch of issues, things that you could be focusing on as well. Sometimes that's not so easy. But I think having more people look at your website and give you advice, and being open to that advice, is an important aspect here. Another thing to keep in mind is that just because something is technically correct doesn't mean that it's relevant to users in the search results, and it doesn't mean that it will rank high. If you clean up your whole website and fix all of the issues, but, for example, your website contains lots of terrible content, then it still won't rank that high. So you need to, on the one hand, understand which of these technical issues are actually critical for your website to have fixed, and on the other hand, really focus on the user aspect as well: find out what issues users are having, and how your website can help to solve those issues or answer those questions. And that aspect is sometimes a bit tricky, so it's not something that I'd say is always trivial. For some of these niches, there's a lot of really strong competition from people who have been working at this for a long, long time, and that can make it quite a bit more difficult than something that has a lot less competition. So unfortunately, there's no simple answer to getting high rankings and lots of traffic.

I have two websites. Both started using HTTPS six months ago, with two different results. We did 301s on both of them. Site A gets less than 10 clicks on the HTTP version; all the traffic is now on HTTPS. Site B still gets 1,000 clicks on the HTTP version.
Is this telling me that Site B has some issues somewhere, or not?

So in general, it doesn't mean that there are necessarily issues, but the thing to keep in mind is that we look at websites individually. Our algorithms don't necessarily say, oh, these are two websites from the same person, therefore they should be doing exactly the same thing. It can be that we still see issues with one website, and that affects a significant amount of traffic. It could be that we see the same issues with the other website, but because nobody is searching for those pages, you don't really notice it. So that's something where I suspect, if you're still seeing something like one third of your traffic going to the HTTP version six months after moving to HTTPS, that something somewhere is still not lined up properly. It could also just be that the way you're measuring these things is not set up properly. So I would certainly recommend diving into that and figuring out what is happening here. Why are these redirects not working? Why might Google be showing the HTTP version of these pages? Or are people going to the HTTP version in some other way? All of those might be different aspects that you could take a look at there.

If a site is blocking, in robots.txt, redirects that lead off the site, and those URLs are getting indexed and ranking, could that impact the site negatively from an SEO standpoint?

So I think there are multiple angles that come into play here. On the one hand, if those are redirects that lead to content that we could otherwise index, then it might be that we're having trouble finding and indexing that content. So that definitely would affect the websites from an SEO point of view. However, that applies more to the content that's being redirected to. So if the content that's being redirected to is on a different website, then that other website would be the one affected by blocking those redirects.
If you're talking about the website that is doing the redirect, and those redirects are going somewhere else, then in general that wouldn't be such a big issue, in that we see lots of URLs across the web that are blocked by robots.txt, and we have to deal with that. However, that does mean, or could mean, that these blocked URLs, if they're being shown in Search, are competing with the other pages on your website. So for example, if you have one page about a specific type of shoe, and a redirect linked from there going to, maybe, the affiliate source where people can buy it, and that redirect URL is also being indexed while blocked by robots.txt, then your page about the shoe and that redirect leading to the store for that shoe are kind of competing with each other. So it's not necessarily that we'd say this is a sign of low quality. It's more that, well, you're competing with yourself. We can't tell what that page is that you're linking to; therefore, we might show both of these URLs in the search results. And depending on how much you care about that, it might be that we're showing one that you'd prefer not to have shown in Search. So that's the main aspect there. I wouldn't see it as an issue from a quality point of view, but more a matter of you competing with yourself.

Does Google negatively rank any search terms related to marijuana, medical marijuana, CBD oil, et cetera?

I'm not aware of anything like that. In general, the thing to keep in mind is that we can't really negatively rank any search term, because if we move all of the results down by 10, then we'd still be starting with the same list. So there's no general way to say, well, we will rank these lower for these queries, because if everything starts a little bit lower, it's still the same list. So I'm not aware of anything specific that we're doing there.
I could imagine that we have elements in our search systems that say, well, these might be topics that are particularly critical for people, where people care about finding really strong, good information. So we should be a little bit more careful with regards to what we show there. We shouldn't just randomly show pages that just happened to mention these words as well. So that might be something that plays a role here, but it's definitely not the case that we would just generically demote everything with regards to these queries, because, like I mentioned, then we'd essentially just have the same list again. It's not that we would say, well, the first page of search results is empty; it essentially wouldn't really change much. In Search Console, under the mobile usability report, there are a number of valid URLs compared to the number of indexed pages, indicating how close or how far you are from the mobile-first index. So first off, again, mobile usability is completely separate from mobile-first indexing. So a site might or might not be usable from a mobile point of view, but it can still contain all of the content that we need for indexing. As an extreme example, if you take something like a PDF file, then on mobile that will be terrible to navigate. The links will be hard to click, the text will be hard to read, but all of the text is still there, and we could perfectly index that with mobile-first indexing. So mobile usability is not the same as mobile-first indexing. And I guess the question goes on: if only 30% of my currently indexed URLs are shown as valid in the mobile usability report, does that mean Googlebot Smartphone is having an issue crawling my site to find the remaining URLs? No, no.
That can be a little bit confusing with some of the reports in Search Console, in that what you're primarily looking at there is to see if there is a significant amount of problems with the pages that we find there, not necessarily to see that the total number of indexed pages matches the total number shown across all of these reports. So in particular, reports like the mobile usability report, and I believe the structured data reports as well, are based on a significant sample of the pages on your website. They're not meant to be comprehensive, complete lists of all of the pages of your website that are indexed. So if you see 1,000 URLs shown as indexed in the Index Coverage report, that doesn't mean you should see 1,000 URLs across the different reports within Search Console. It might be that you see, I don't know, 500 or 100 of them being shown in the mobile usability report. And as long as those that are shown in the report are all OK, then you should be all set. So it's not something where you need to aim to have the same number across all of these different reports. Hearing myself say this now, it feels like this is a bit confusing. So maybe we should make that a little bit clearer in Search Console. I can see how you might be tempted to say, well, if it says 1,000 here, I need to have 1,000 everywhere. Maybe we need to be a little bit clearer there. That's a good point. We have a website with a Swedish country code top-level domain, .se. We do business not only in Sweden, but also worldwide. Our business is about outsourcing software development services. Does it make sense, from an SEO point of view, to migrate from a country code top-level domain to a generic top-level domain like .com? Would it be easier to rank in search for generic queries? Is it possible to rank high in search with a Swedish domain in countries other than Sweden? That's a good question. That's something I hear from time to time.
In general, if you have a country code top level domain, your site can still be very relevant globally. So if you're targeting a global audience and you have a country code top level domain, that's perfectly fine. That's not something that you would need to change. However, if you're targeting individual countries outside of your own country, then with your country code top level domain, you wouldn't be able to do that. So for example, if you have a Swedish top level domain and you explicitly want to target users in, say, France, because you have a special offering that works particularly well for users in France, then you wouldn't be able to geotarget for France with a Swedish top level domain. However, if you're saying my services are global and everyone worldwide should be able to take advantage of them, then you wouldn't see any change by going to a global top level domain versus your country code top level domain. The bigger effect that you might see there, though, is more about from a usability point of view, from a user point of view, if they see a Swedish top level domain in the search results and they're not based in Sweden, would they assume that this might not be something for them? That's something you can help to resolve by using clear titles in English, for example, or in whatever language that you're talking to the user about and generally making sure that your site is clearly seen as a global website. So when people go there, they realize, oh, this is actually a global business. They're based in Sweden, which is really cool, but it's not the case that they would refuse my request for service or a product if I had something that I wanted from them. So from that point of view, you don't need to go to a global top level domain if you're targeting a global audience. If you want to target individual countries, then yes, you would need to have something where you can set geo-targeting individually. So that would be something like a .com or .eu even. 
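One mechanism worth noting alongside ccTLD geotargeting: hreflang annotations can signal which language or country variant of a page should be shown to which users. A minimal sketch, assuming a Swedish site with an English and a French variant (all URLs here are made up for illustration):

```html
<!-- Hypothetical hreflang annotations in the <head> of each variant -->
<link rel="alternate" hreflang="en" href="https://example.se/en/" />
<link rel="alternate" hreflang="fr-FR" href="https://example.se/fr/" />
<link rel="alternate" hreflang="x-default" href="https://example.se/" />
```

hreflang helps Google serve the right variant to the right audience; it doesn't change rankings, and it's separate from the Search Console geotargeting setting discussed above.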
All of the newer top-level domains are also seen as global top-level domains. When we have to block content for certain US state IPs due to state laws, is it OK to exclude Googlebot from the restriction? Though it's technically cloaking, we want to be searchable for users from certain US states, while not showing content for users from other US states. What's the best way to approach this situation? So yes, that would be cloaking. In particular, Googlebot generally crawls from one location. And if you're looking at a US-based website, we would be crawling from the US. Generally speaking, the more common IP geo databases tend to geolocate Googlebot to California, which I guess is where the headquarters are. I don't know if all of the Googlebot data centers are actually there, but that's where they tend to be geolocated to. So if you're doing anything special for users in California, or in the location where Googlebot is crawling from, then you would need to do that for Googlebot as well. So in particular, if you see Googlebot crawling from California and you need to block users from California from accessing your content, you would need to block Googlebot as well. And that would result in that content not being indexed. So that's something that, from a practical point of view, is kind of the way it is. How you deal with that is ultimately up to you. One approach that I've seen sites take which generally works is to find a way to have content that is allowed across the different states in the US, so that you have something generic that can be indexed and accessed by users in California, and that we could index as well and show in the search results like that. Or, alternately, if California is in your list of states that are OK to show this content to, then that's less of an issue anyway. The other thing to keep in mind is that we can't control what we would show in search results for individual states.
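As a rough sketch of the non-cloaking approach described above: treat Googlebot like a visitor from its crawl location, rather than special-casing it. The state codes, the assumed crawl location, and the function name here are all illustrative assumptions, not anything Google publishes:

```python
# Minimal sketch: serve or withhold restricted content per US state,
# treating Googlebot as a visitor from its assumed crawl location.
# BLOCKED_STATES and GOOGLEBOT_STATE are illustrative assumptions.

BLOCKED_STATES = {"CA", "NY"}   # states where the content may not be shown
GOOGLEBOT_STATE = "CA"          # Googlebot is commonly geolocated here

def is_restricted(visitor_state: str, is_googlebot: bool = False) -> bool:
    """Return True if the request must get the restricted (blocked) page.

    Googlebot gets the same treatment as a human visitor from its crawl
    location -- showing it the full content while blocking Californian
    users would be cloaking.
    """
    state = GOOGLEBOT_STATE if is_googlebot else visitor_state
    return state in BLOCKED_STATES

# A visitor from Texas sees the content; Googlebot is blocked, because
# its crawl location (California) is on the blocked list -- so the
# content would also drop out of the index.
print(is_restricted("TX"))                     # False
print(is_restricted("CA"))                     # True
print(is_restricted("TX", is_googlebot=True))  # True
```

The design consequence is the one John describes: if California is on the blocked list, the content can't be indexed at all, which is why having a generic, everywhere-allowed version of the page is the usual workaround.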
So if, for example, your content were allowed in California, and we can crawl and index it there, we would not be able to hide it in, I don't know, say, Alabama, or in any of the other states, because we wouldn't know that it wouldn't be allowed to show there. So it might make sense to block the cached page if it's really a problem that people are not allowed to access this content at all. You might need to think about things like the description, maybe using a nosnippet meta tag, in extreme cases, if even just the description might result in that content being shown. All of these things are a lot harder when you have restrictions like this. And the same applies globally as well, where if you have restrictions across different countries, you need to show Googlebot the content that users in those countries would see when Googlebot crawls from those locations. And again, we generally crawl from the US. So anything that is blocked for all users in the US, you would need to block for Googlebot as well, and maybe find an alternate way of having content that is generic enough that it would be allowed in the US, that we could use for indexing. And similarly, you'd need to understand that we would still show that content, if we can index it, to users worldwide. So if there are any legal restrictions that you have with regards to that content being seen at all in other locations, that's something you might need to deal with and find ways to restrict properly. Like with a nosnippet meta tag, by blocking the cached page in search, whatever methods you need to use there. It might just be that you need to show an interstitial, which might be a really simple way to deal with that. But all of these legal topics are really tricky from our point of view. We need to make sure that we see the same content as a user from that location. That's kind of our primary requirement.
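The two mechanisms mentioned above — suppressing the snippet and blocking the cached copy — are both robots meta directives. A minimal sketch:

```html
<!-- In the <head> of the restricted page: no text snippet in search
     results (nosnippet) and no "Cached" copy served by Google
     (noarchive). The page itself can still be indexed and ranked. -->
<meta name="robots" content="nosnippet, noarchive">
```

This only limits how the page is presented in search; it does not restrict who can click through to the site, which is why an interstitial may still be needed.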
And after that, everything else is more a matter between you and whatever legal guidelines you need to follow there. That's not something where we could tell you what you explicitly need to do to make sure that this content is compliant. We have a problem with Rocket Chat with the Sign-On button. I can't really help with the single sign-on button there, so I'm not sure what you need to do there. I would try to contact the general Google developer folks, maybe on GitHub, maybe on Twitter. I'm not quite sure where they would be, the ones that are doing the Google login stuff. OK, let's see. Wow, we just have a few minutes left and a handful of questions. Let me run through them really quickly, and then we can still have a bit of time for any comments or questions from you all as well. Should we wrap our logo in an H1 tag? You can do that. I don't think Googlebot cares either way. In general, the H1 is the primary heading on a page, so I'd try to use that for something useful, so that when we see the H1 heading, we know this is really a heading for the page. If, for semantic reasons, you just want to mark up an image without any alt text or without any text associated with the image, that's kind of up to you. You can also have multiple H1 elements on a page; that's also up to you. That's fine for us. So it's not something where our systems would look down on a page for using an H1 tag improperly, or for not having an H1 tag. I have more than 100 pages on my site and they're not indexed; it says that they're deleted. This is something where I would go to the Webmaster Help Forum and show some screenshots of what you're seeing, and mention your URL, so that people can take a quick look. There might be something really trivial that you can tweak. It might even be something that you can ignore, which makes it even easier. According to a former question, Google can deal with a single page which has the same schema information twice, in microdata and JSON-LD.
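A quick sketch of the logo-in-an-H1 case discussed above (the site name and file path are made up): the markup is allowed, and alt text keeps the heading meaningful:

```html
<!-- Wrapping a logo image in an H1 is fine; Googlebot doesn't treat
     it specially either way. The alt text gives the heading content
     that search engines and screen readers can use. -->
<h1>
  <img src="/img/logo.svg" alt="Example Store">
</h1>
```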
But what does Google do if the information is different? So this is kind of a general SEO type of thing, where if you're giving us conflicting information, you shouldn't be surprised if we don't know what to do with it. So if you're giving us information and you want us to do something specific, then make it as obvious as possible what you want us to do, and don't frame it in a way that Google should be able to figure out which of these elements is the right one and ignore the other one. It should really be as obvious as possible. So if you're using different structured data elements, and they're marked up in different ways, and they're kind of conflicting because they're not complete individually, then maybe we'll get it, maybe we won't get it. I wouldn't rely on us being able to interpret something that's a bit messy. Make it as clear as possible, and then we'll be able to follow it as clearly as possible. Yeah, wow. OK. I think we made it through. What else is on your mind? I'll give you a shot. OK. Happy New Year, John. So just two quick questions. One is actually related to what you mentioned about ccTLDs. So are you saying that, for example, I have a .ro website for our agency, if I want to target the international market, people from the United States or Canada, I wouldn't need to go to a .com domain or anything like that if I have a ccTLD? OK. And does the ccTLD still help you get better visibility in that specific country, because that usually used to be the case? Yeah. Yeah. I've actually recently migrated from a ccTLD to a global domain, and now SEO impressions went up a couple of thousand percent. So it somehow is a big deal for us, but we don't know why just yet. But we are glad we made the change in our case. OK. That's awesome. Cool. That feels a bit more than any normal geotargeting change. But it's awesome to hear, yeah. Yeah, we were quite surprised at the big difference, but we were glad that we made the change in the end. Cool.
And one more, regarding canonicals. We've noticed for one of our websites that Google seems to be ignoring our canonicals and choosing the inspected URL as a self-referencing canonical. Just to give you an example, this URL, which is a filtered URL, has a canonical to another URL, and Google seems to index both of them and show both of them in search. And I just can't figure out why. One reason I might think that might be happening is that Google Tag Manager inserts an iframe in the head section. So I fear that Google might not be seeing the canonical, although Search Console does report the canonical being there; it just chooses to ignore it. The Google-selected canonical is the inspected URL instead of our declared canonical, and it's being shown in search. I'm not even sure that your URL is being found by Google. And yeah, I'm not really sure why it's showing up and why it's not being canonicalized. OK. Let me see if I can spot something there. So I think one of the things to keep in mind is also that this site is on mobile-first indexing. So if you're doing something special with the mobile version, that might make it a little bit trickier. It's a responsive design, so I don't know if you're doing it. OK. Then that shouldn't matter. It looks like we're just indexing both of these. So yeah, I don't see anything fancy happening there. And it's not the only URL that's a problem. We have a lot of indexed, not in sitemap URLs, and most of them have correct canonicals. They're just not being, Google kind of doesn't trust our canonicals, something like that. I think in this case, it's probably just also something that is kind of seen as an irrelevant URL, so we don't put a lot of effort into figuring out how to index it. So it's indexed, but it's probably not that visible in search, I would imagine. Yeah, that's our problem, that it takes about 30% to 40% of the total clicks or impressions for those specific keywords.
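The setup being described — a filtered URL declaring the unfiltered listing as canonical — would look roughly like this (the URLs are made up for illustration). Note that rel=canonical is a hint, not a directive, which is why Google can still pick the inspected URL instead:

```html
<!-- In the <head> of the filtered page,
     e.g. https://example.com/shoes?color=red -->
<link rel="canonical" href="https://example.com/shoes">
```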
So these kinds of URLs do seem to be showing up more than usual. If it were just one or two impressions or something like that, that would be fine. But it's showing up more than I think it should show up. And I know that Google sometimes takes a while before it kind of de-indexes it and selects the correct canonical, but that usually, again, means that it doesn't really show up in search anyway. Yeah. I don't know. Well, I'll just post this in the help forums. And if you get any ideas, you can just post that. Yeah. So I mean, looking at that particular URL, it gets almost no impressions. So it might be a matter of a lot of these URLs combined getting some impressions, but at least individually, that's like one URL. Every now and then, we show it in search, but really rarely. OK. So this is kind of one piece of feedback for Search Console, because these kinds of URLs don't show up. There's no message like "indexed, Google ignored canonical" or something like that in Search Console. So you only have "indexed, not in sitemap" and "indexed, submitted, not in sitemap", but you don't have "indexed, not in sitemap, and Google completely ignored your canonical". So I think that might be something useful in cases where maybe your canonicals are incorrectly set or something like that. Yeah. I don't know. Yeah. Lucy. Interesting. Cool. Also, very quickly, regarding this, it's not about canonicals, it's rather about pagination. We also see that, in terms of categories, there are oftentimes, for the same site, where Google seems to pick page five of the category, or page three, when searching for keywords that are in the meta title. So that wouldn't change from one page to another. But Google still seems to pick a deeper page rather than the first page of the category. Is that also something that? That can be normal. That's something where we try to use rel=prev/rel=next to understand that this is a connected set of items.
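For context, the rel=prev/rel=next annotations mentioned here are markup that Google documented at the time for paginated series (the URLs below are illustrative). On page 2 of a category this would look like:

```html
<!-- Hypothetical pagination markup in the <head> of page 2 -->
<link rel="prev" href="https://example.com/category?page=1">
<link rel="next" href="https://example.com/category?page=3">
```

As the answer notes, these annotations connect the pages as a series; they don't force Google to always rank the first page of the set.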
But it's not the case that we would always just show the first one in the list. So it could certainly be the case that we show number three, number five, or something like that for a ranking. So we shouldn't worry too much about it? No, that can be fine, yeah. I mean, it can be a sign that maybe people are searching for something that's not on the first part of your list. So if it's something important that you think you'd like to be found more, then maybe you should find a way to restructure the list to put the important items in the front, to make it a little bit easier for users to find them when they go to that list. And accordingly, when we look at that list, usually the first page is highly linked within the website, so we could give it a little bit more weight if it just had that content that we thought was relevant. OK, yeah, that makes sense. Though in this case, it's a keyword that is in the meta title, so it's everywhere. And the first page is linked in the menu, so it should be much more linked than page three or page five. That's why it was a bit odd. Yeah. Cool. Any other questions from you all? I have a question about emojis and SERPs. We're trying to experiment with our SERP snippet, and I noticed one of our competitors has emojis in the SERP. So we tried to add them, but they don't appear to show up for us. We know that Googlebot has picked up the new meta description, but it's choosing not to show it. What are the metrics, I guess, that Googlebot uses to figure out whether or not to display the emojis? I don't know. So I think there are two aspects there. We don't always show exactly what is listed in the description or in the title. So that might be playing a role there. With regards to emojis, we also filter some of these out in the search results. So in particular, if we think that it might be misleading, or it looks too spammy or too out of place, then we might be filtering that out.
So depending on what you're showing and what you're seeing otherwise in the search results, if the same emoji is being shown for other sites, then we could be able to show it for your site as well. It's probably just a matter of us updating the title or description and picking that up and actually showing that to users. I think we'll have a play. Thank you. Sure. John, it's Michael. Happy New Year. Hi. Happy New Year. I just want to actually go back to the person at the beginning who was kind of nervous about changing their theme. I've been listening to these Google Hangouts now for two or three years, and we all sort of had this fear of making changes. And the suggestions that I heard through these Google Hangouts have made all the difference in a positive way. So that person really shouldn't be entirely nervous. Just look at what other people in your sector or area are doing. Think about keeping it as clean as possible. Go by the less-is-more theory of fewer ads and fewer bells and whistles. And then you'll probably be less afraid about how it will affect you, and probably see good results. So I just wanted to allay their fears. Cool. Thank you. All right, let's see. There's one more in the chat here about the Search Console URL Inspection Tool and the info: operator. Are both methods suitable to assess the canonical URL? Will they sometimes return different results? Is there a delay? For a large part, the info: operator does show the canonical URL, but that's a little bit less by design and more accidental. So if you want to see the current canonical that we pick for a URL, you should really use the URL Inspection Tool. That's something where I know the Inspection Tool is specifically designed for that, and the info: operator could be a bit tricky there, in that it's more something that we try to have for a general audience, so it's not really a technical SEO tool, per se. So that could also change over time.
So if you need to figure out what the current canonical version is, definitely use the Inspection Tool; don't just rely on the info: operator, or the cached page, or a site: query, anything like that. All right. Let's take a break here. It's been great having you all here. I think we had a good run last year, and I'm looking forward to another good run this year, and hope to see you all again in one of the future hangouts. Thanks for dropping by. Thank you once again. Thank you. Bye, everyone.