All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these Office Hours Hangouts, where webmasters and publishers come together and we can discuss topics all around web search and websites. As always, a bunch of questions were submitted already. We don't have a lot of you here live at the moment, so I just wanted to talk about one thing briefly before we move on to the questions.

As you've probably seen, we did a blog post about some of the changes that are coming up in Search Console. Last year, we started moving Search Console to a new platform. For the Search Console team, this is something they've been working on for quite some time, and it's something we have to get complete more or less towards the end of the year at the latest. So our goal with the blog post is to inform you all as early as possible, so that when things change or go away in ways that we think might affect you, you know about it ahead of time. Change, I think, is always a bit of a hassle, especially when you have processes that work fairly well and someone goes off and changes the tools, or changes the data that's provided in the tools. That's always a bit frustrating. Some of these changes we can't avoid. Some of them are things we should have done way in the beginning, when we started with Search Console, if we had known then what we know now, something like the canonical change that we announced this week. So I realize this is sometimes a bit frustrating, but we hope we'll have a fairly nice path going forward, and we want to inform you early and let you try things out early, so that you're not too surprised when things do change.

We also have a lot of really neat stuff lined up. By shifting to a new platform and removing some of the old features, the team has a lot more time to actually move forward and create new and fancy stuff. So if you feel strongly about certain things that are going away, that are missing, or that you'd like to see in a new tool, make sure to use the feedback feature in Search Console. And don't just go in there and say, "I really want this," but rather give us some information about what you'd like to see: what are you trying to achieve by having this new feature, or by having the same thing in the new tool as in the old one? Giving us a little bit more information there helps us figure out how we need to prioritize this. Is this something that we maybe missed, something we should have thought about earlier? Is this something where we could provide a better way to give you that information, or to help you do that thing, than we had in the old tool? So make sure to go into the feedback tool and tell us what you'd like to see differently. Some of these things we'll be able to do. Some of them might take a little bit longer, because we really need to first clean out all of these old things that we've collected over the years and move everything over to the new platform. For some of that I'd ask for a little bit of patience, but it's also fine to let us know vocally if there's something that you feel really strongly about. So don't be too shy.

All right, so with that, I guess we can go to questions from you all to start with.
Is there anything on your mind that you'd like to talk about before we move on to the submitted questions?

Hi, John. Recently, we faced a very interesting issue. We found that on one of our client websites, the way they built the site, there is no paragraph tag. Everything is an h1, h2, h3, or h4 tag. They use the heading 4 tag instead of the p tag; they do use the p tag sometimes, but very rarely. So the main content of the website is marked up with heading 4 tags. Does it have any negative impact on their ranking that there is no paragraph tag, and everything is a heading 4, heading 3, heading 2, or heading 1 tag?

I don't see a big problem with that. Obviously, since you noticed it, it's probably something that would make sense to clean up. But it's not that we would say there's a negative effect. Rather, by marking everything up as a heading, you're telling us it's all the same importance. Everything is important; therefore, nothing is particularly important. So we have trouble understanding the context within the page. That's mostly why I would clean that up, so that we can figure out which parts are really important and which parts are the normal text. That way, we can understand these pages a little bit better. I don't know if you would see a direct ranking change from fixing that, but it makes it easier for us to figure out what the page is really about.

At this point, the ranking is pretty much good. But if it could cause any issue in the future, then fixing it is a must for us.

No, I don't think this is something that we would see as spam, or something we'd see as problematic. It's really just that you're not giving us as much information as you could be, by telling us this part is really important and this part is normal content.

Thank you.

All right. Any other questions before we get into the submitted ones? Nothing? OK. Cool. That's fine too. Maybe things will come up during the course of the Hangout and we can get into those.

All right. The first one is about Search Console. We keep getting random broken links in Search Console. I wonder what we should be doing there: redirecting them, or leaving them as they are?

So I don't know what random broken links you're seeing there; that might be something to post in the forum to get some advice on as well. But in general, if you see a link pointing at your website that doesn't work at all, then it's fine to return 404 for a URL that doesn't exist. That's what the 404 status code is for, and it's something that our systems work well with. So if there's a URL that never existed, return 404. That's perfectly fine. On the other hand, if you see links coming to your website that point at URLs where you can guess what was meant, maybe they just have a typo or an extra dot at the end or something like that, then it might make sense to redirect those, especially when you're seeing people going through those links. That seems like a case where someone tried to recommend your website but didn't get it perfectly right, so it might make sense to redirect those to the correct page instead. I think for both of these situations, you can also look a little bit at the traffic through those URLs. If a lot of people go to those URLs, that's somehow encouraging, because people are wanting to go to your web pages, and then it might make sense to figure out what was meant with each link, where you could point it, and where you could redirect people to.
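To make that concrete, here is a minimal sketch of the 404-versus-redirect decision described above, written as a hypothetical Flask app; the typo paths and their targets are made-up placeholders, not anything from the Hangout:

```python
# Minimal sketch of the 404-vs-redirect logic for broken inbound links.
from flask import Flask, abort, redirect

app = Flask(__name__)

# Hypothetical inbound typo URLs spotted in Search Console,
# mapped to the pages that were probably meant.
KNOWN_TYPOS = {
    "contcat": "/contact",
    "blog/post-1.": "/blog/post-1",  # stray trailing dot from a badly copied link
}

@app.route("/<path:path>")
def handle(path):
    if path in KNOWN_TYPOS:
        # A guessable typo: permanently redirect to the intended page.
        return redirect(KNOWN_TYPOS[path], code=301)
    # A URL that never existed: a plain 404 is exactly what the status code is for.
    abort(404)
```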
What factors might cause a piece of content that's been syndicated on a partner site to rank well? This is despite the fact that the canonical is set to the original content on my site and has been there for several months. Is it a matter of site, niche, or authority?

I think this is always a tricky situation. We do try to figure out which page is the most relevant for some of these queries and to point users directly there. But if these are completely separate websites and they're just posting the same article, then there's also a lot of additional value from the rest of the website. That could be information on that specific page, or it could be additional value that the rest of the website brings, where, when someone goes to that one article, maybe they go off and look at other things on that website, because otherwise that website is also very nice. So that's something that can always happen. If you're syndicating content, you need to take into account that the content you syndicated to some other website might end up ranking above your own content. That's not always completely avoidable, so those are trade-offs you have to look at. I think the canonical is a good way to let us know that these two pages belong together. But it's also the case that a canonical isn't really correct in a case like this, because the pages themselves might be completely different. There might be a block of text that's the same across both pages, but there might be a lot of other content around it that is completely different: user comments, or the rest of the website itself. So again, that's a trade-off you have to look into. It makes sense to bring the information out to a broader audience by syndicating the content, but on the other hand, you have to take into account that maybe these other websites will rank above yours when someone searches for that specific piece of content.

Google is reporting our expired product pages as soft 404s. These URLs redirect to a relevant alternate product, with a message saying that the product they wanted is unavailable. Is the redirect causing the soft 404, or the content of the redirect target?

So I suspect what is happening here is that our algorithms are looking at these pages and seeing that there's a banner on the page saying this product is no longer available, and they assume that applies to the page that the user ended up on. That's sometimes not really avoidable. If you're really replacing one product with another, it might make sense to just redirect. If a product is simply gone and no longer available, then I would leave that in kind of the soft-404 state, where you say this product is no longer available.
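As a rough sketch of that split, redirect when a true replacement exists, otherwise answer with an explicit "gone" status instead of a redirect that reads like a soft 404, here is a hypothetical Python handler. The catalog data and the choice of a 410 status are illustrative assumptions, not something stated in the Hangout:

```python
# Hypothetical product-page handler: 301 when a real replacement exists,
# an explicit "gone" status when the product is simply discontinued.
PRODUCTS = {"widget-2": {"status": "active"}}
REPLACED = {"widget-1": "widget-2"}   # old product slug -> its successor

def product_response(slug):
    if slug in PRODUCTS:
        return 200, f"/products/{slug}"
    if slug in REPLACED:
        # A true replacement: redirect users (and signals) to the new product.
        return 301, f"/products/{REPLACED[slug]}"
    # Gone for good: an explicit 410 avoids the soft-404 ambiguity of
    # redirecting and then saying "this product is unavailable".
    return 410, None

print(product_response("widget-1"))  # (301, '/products/widget-2')
print(product_response("widget-9"))  # (410, None)
```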
Does Google consider backlinks in the ranking of videos in the video carousel on Google? I think we looked at this question last time already, so I'll skip it here.

How long does it take for Google to recognize hreflang tags? Is it possible that Google first indexes from Switzerland and shows a CH version of a website under the German TLD?

So we don't actually index content first from Switzerland. Our crawlers and most of our systems are located in the US rather than in Switzerland, so I don't think we would prioritize Swiss content over other content. But what happens with hreflang links in general is a multi-step process. First, we have to crawl and index the different versions of the page. Then we have to index them with the same URLs that you specify within the hreflang markup. And then we need to be able to follow that hreflang markup between those different versions of the page, and to do that, we need to have the confirmation back from the other versions as well. So it's something that does take a little bit longer than just normal crawling and indexing, because we have to understand the network of links between these different pages that are all supposed to be part of this set of hreflang pages. It's probably normal for that to take, I don't know, maybe two to three times longer than it would take to just crawl and index an individual page, so that we can understand the links between the hreflang versions.

And again, there is no preference for Switzerland over other countries in Europe. That would be nice for me personally, from a kind of egotistical point of view, but it wouldn't make sense globally. We do try to treat all websites the same, so just because a website has a CH version doesn't mean it would automatically rank above a German version. The other thing with hreflang is that, for the most part, it doesn't change rankings. It just swaps out the URLs. I think that's fairly well known.

Let's see. I recently changed my site's domain from one domain to another. 301 redirects are in place, and a change of address has been initiated. I'm still seeing the old URLs in the index, and it's been over three weeks. Is that normal? I had some issues with the 301s not being active for a week after the migration happened, but they're active now. For some queries, both the old and the new site show up in the search results. What could I be doing differently here?

So the 301 redirects are really what you should be watching out for. It's important for us that the redirects are on a per-page basis, so that each of the old pages redirects to its equivalent page on the new website. We have all of that covered in the Help Center information on site moves, so I would double-check that and go through it step by step, even URL by URL, to see that this is really working the way it should. The other thing to keep in mind is that we crawl and index URLs individually. We don't crawl the whole website at once and then switch things over; we do that step by step. Some of these pages are re-crawled and re-indexed very quickly, within a couple of hours; some take a lot longer, and that could be several months. So that could be playing a role here as well: maybe we just haven't had a chance to crawl, index, and process the redirect for all of these pages, so there are still some that we've only seen on the old website and some that we've already seen on the new one. That would be kind of normal, especially if you're looking at a period of three or four weeks.
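One way to do that URL-by-URL spot check is a small script along these lines. This is a minimal sketch assuming the requests library; the domains and the old-to-new mapping are placeholders:

```python
# Spot-check, URL by URL, that each old page 301s to its equivalent new page.
import requests

MOVES = {
    "https://old-site.example/products/widget": "https://new-site.example/products/widget",
    "https://old-site.example/about": "https://new-site.example/about",
}

for old_url, expected in MOVES.items():
    # Don't follow the redirect: we want to see the hop itself.
    r = requests.get(old_url, allow_redirects=False, timeout=10)
    location = r.headers.get("Location", "")
    ok = r.status_code == 301 and location == expected
    print(f"{old_url} -> {r.status_code} {location} {'OK' if ok else 'CHECK THIS'}")
```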
And finally, something that also plays into this, and that SEOs and webmasters find really confusing, is that even after we've processed the redirect, if someone explicitly looks for the old URL, we'll show them the old URL. That's a little bit confusing. Our systems are trying to be helpful here and say, well, we know this old URL used to exist, and we have the new content here, but we'll show it to you because that's specifically what you're looking for. So, for example, if you do a site: query for the old URLs, even maybe a year after doing a site move, we can still show you some of the URLs from your old website, even though we've already processed the redirect for those URLs. When you change your site name, for example, you'll see the old URLs within the site: query, but with the new site name mentioned there. From our point of view, that's working as intended; we're trying to help the average user who is looking for a URL. For a webmaster who just did a site move, that is a bit confusing. So I don't know if that's something we'll be changing, but in general, I think it kind of makes sense.

John, regarding migrations and Search Console: whether there's a registered site move or not, I just redirect everything over. When you're looking into the old domain, the one that now redirects to the new domain, I noticed in one case that errors start popping up. I looked at the errors, and they said "submitted but not indexed," that it had a noindex tag or something like that. But when I entered the actual URL, it did redirect me to the new domain, which indeed had a noindex tag. So it looks like when you're using URL Inspection on the domain that redirects, it returns the results for the target URL that it redirects to. Is that intended? It's a bit confusing, because you're thinking, oh my god, the redirect is not there anymore. Why isn't it saying it's redirecting instead? Fetch as Google used to just tell you, well, this is redirecting, that's it. Now URL Inspection is telling you that it's noindex or something like that.

OK, so you used the live test, I imagine?

Yes, both the live test and the normal one.

Yeah, OK. The indexed test, I think, makes sense, because we see the final version as the state that we would index. But with the live test, that would be a bit confusing. So I don't know; I need to figure out what is happening there. That seems like something where we should be showing a redirect status rather than just blindly showing the final state.

So you mean the basic URL inspection, just entering the URL and submitting it, should show the final version, the results based on the redirect target, but when I hit the live test, it should show me it's redirecting?

I don't know if that's what it does at the moment, but I think that's how it would make sense, so that you can actually debug these kinds of redirect situations. I do wonder now, though, if it's specific to the noindex, whether we follow the redirect and say, oh, telling you about a noindex is more important than telling you a redirect is there. But I don't know. I'll double-check to see what is happening there and follow up with the team.

Yeah, this is one of the URLs that's doing that. It is redirecting, and I just tested it now, both with URL Inspection and the live test, and it just shows me there's a noindex tag, because there is a noindex tag on the final URL.

OK. Cool. That sounds interesting.
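For debugging this kind of case on your own side, a two-step check reproduces the distinction discussed above: first the raw redirect hop, then the robots meta on the final target. A sketch assuming requests and beautifulsoup4, with a placeholder URL:

```python
# Step 1: look at the redirect hop itself, which Fetch as Google used to report.
# Step 2: look at the noindex on the final target, which URL Inspection appears
# to be evaluating in the situation described above.
import requests
from bs4 import BeautifulSoup

url = "https://old-domain.example/some-page"  # placeholder

hop = requests.get(url, allow_redirects=False, timeout=10)
print(hop.status_code, hop.headers.get("Location"))

final = requests.get(url, timeout=10)  # follows the redirect chain
soup = BeautifulSoup(final.text, "html.parser")
robots = soup.find("meta", attrs={"name": "robots"})
print(final.url, robots.get("content") if robots else "no robots meta tag")
```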
All right. How does Googlebot view website personalization? We have a new product layer on the website that allows personalization based on industry, location, even down to a single company. This allows us to really adjust the content.

I think we looked at this one last time as well, but just to give a quick answer here: the important part is that Googlebot mostly crawls from the US. So if you serve different content to different countries, Googlebot would probably only see the US version of the content, and we would not be able to index different versions of the content for different locations. So if there's something that you want to have indexed, make sure it's in the generic part of your website, so that Googlebot is sure to be able to pick it up. You're welcome to use personalization to add additional information across the page, but if you want something to be indexed, it should be in the part of the page that is not tied to personalization.

I'm wondering how much a very low performance score in web.dev affects the Google ranking of a website.

I don't know. So web.dev is a really cool tool that pulls together the different tests that we have in Lighthouse, gives you scores on those, and guides you through the process of improving those scores: when it comes to speed, things that you need to watch out for, things that you could try. And over time, it tracks the progress of your website as you work through the different content in the tool. So I think it's generally a good practice to work through that. But these are good practices to follow; it doesn't mean that they will automatically result in a higher ranking. Similarly, if you have low scores here, it doesn't mean that your website is ranking terribly because it's not following the best practices. On the one hand, if your website is so bad that we can't index it properly at all, which might be the case with a really low SEO score in Lighthouse, where we can't access the URLs, or where there are no URLs on the page and it's just a JavaScript shell whose JavaScript we can't process, that could have a severe effect on your SEO. On the other hand, if it's just a matter of your site being a little bit slow or not being perfectly optimized, I don't know that that would cause a significant effect on your website. So my recommendation here is to look at the advice that's given in tools like web.dev, think about what you can implement, and think about the parts that are important for your website: on the one hand for search engines, if you're asking this here, but on the other hand also for users. Ultimately, if you're doing something that improves things for your users, that will have a long-term trickle-down effect on the rest of your website as well.

The URL Inspection tool in Search Console has been giving us different info concerning our URLs in the user-declared canonical field. One day it's not available; the next day it shows a given canonical. Can you give us some insight into why this is happening? Is there a difference between "user-declared canonical: not available" and "user-declared canonical: none"? We've seen both appear in the tool for URLs that have canonical tags on them.

I don't know. So some amount of fluctuation can happen as data is being reprocessed. That might be what's happening, especially if a URL is completely new. It might be that we just haven't processed the canonical yet and that we'll get to it. But this isn't something where you should be seeing things fluctuate back and forth. Usually it's more that, oh, we don't know yet, and then the next state is, oh, we do know, because we've been able to process it properly. It shouldn't be going back and forth. If you are seeing it go back and forth, it would be really useful for us to have some sample URLs and maybe some screenshots, so that we can take a look with the team here and figure out what exactly is happening. With canonicalization, it's sometimes a bit tricky, because it can change over time as well. So as much information as you can give us really helps us figure out what we can do to improve that and make it more useful.
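If you are seeing that kind of fluctuation, one sanity check on your own side is to confirm that the page serves a stable canonical at all. A minimal sketch, assuming requests and beautifulsoup4 and a placeholder URL:

```python
# Check what the page itself declares as canonical, to compare against
# what the URL Inspection tool reports on a given day.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/some-page"  # placeholder
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
link = soup.find("link", rel="canonical")
print(link.get("href") if link else "no canonical tag served")
```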
Can Google decide whether or not to show information from Organization structured data?

So I don't know exactly what you mean with "structured data organization," but in general, we do have algorithms that try to figure out when it makes sense to show structured data as rich results in the search results. When we feel that maybe it doesn't make sense, or when we feel that maybe we're not 100% sure about this website, or about the way the structured data is implemented on this website, then we'll be a little bit more cautious. So even if you provide valid structured data markup, it's not a guarantee that it will always be shown exactly like that in the search results.

I have a problem with a long-term client of mine. I think it's a legal website.

I took a quick look at that before the office hours, and from my point of view, it does look a little bit weird. So I passed it on to the team here to review, to double-check that everything is working as expected. I imagine it's always a bit tricky with websites like that to figure out how they should be handled in search, but I passed this one on to the team to double-check.

A question regarding website structure and multilingual, multi-regional configuration: should I worry about the URL parameter configuration when splitting the domain by folders to separate services in Search Console?

So on the one hand, I think it's good that you're looking into these kinds of issues. On the other hand, I am kind of worried that you would have different configurations for a website by subdirectory, because that sounds like maybe you're not doing something that clean with the URL parameters in general across the website. I don't know the website here specifically, so it's really hard to say, but it sounds like you have different parameters that mean different things, or that can be ignored or shouldn't be ignored, depending on the individual subdirectories within your website. On the one hand, that should be possible; we should be able to deal with that. On the other hand, if there are situations where we can completely ignore individual URL parameters, and other cases where these exact same parameters are critical for the content, then that feels like something where our algorithms could get confused and say, well, we always need to keep these parameters, or we never need to keep them, and then suddenly parts of your content are missing, or parts of your content are indexed multiple times. So using the URL parameter tool definitely helps us in a case like this.
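The cleanup John suggests next amounts to making the same parameter mean the same thing everywhere on the site, and dropping ignorable ones consistently. A sketch of that kind of normalization, where the list of ignorable parameters is an assumption for illustration:

```python
# Normalize URLs so that the same parameter always means the same thing
# site-wide, and ignorable parameters are dropped consistently.
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

IGNORABLE = {"utm_source", "utm_medium", "utm_campaign", "sessionid"}

def canonicalize(url):
    parts = urlparse(url)
    # Keep only meaningful parameters, in a stable order.
    params = sorted((k, v) for k, v in parse_qsl(parts.query) if k not in IGNORABLE)
    return urlunparse(parts._replace(query=urlencode(params)))

print(canonicalize("https://example.com/shop?sessionid=abc&color=red&utm_source=x"))
# -> https://example.com/shop?color=red
```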
That said, it feels to me like it would probably make more sense to clean up these URL parameters in general and find a way to have a consistent structure for your website's URLs, so that our algorithms don't have to guess, or figure out that in this particular path this parameter is important, and in these other paths we can ignore it. Any time you add so much additional complexity to the crawling and indexing of a website, you run into situations where things may go wrong. The easier, cleaner, and simpler you can keep your URL structure, the more likely we'll be able to crawl and index the site without having to think twice. And as always, there are other search engines out there. They don't have the URL parameter tool, or the data that we have there, so they wouldn't be able to see those settings, and you might be causing problems on those other search engines. It might also play a role in how your content is shared on social media. All of these things come into play here. So my general recommendation would be not to spend too much time trying to fine-tune the URL parameter handling tool for all of these different subdirectories, but rather to take that time and invest it into thinking about what you would like to have as a URL structure for the long run, and about the steps you would need to take to get to that cleaner URL structure.

I'm hoping to get some insight on an issue that I posted, related to content being de-indexed as duplicates: "submitted URL not selected as canonical."

I also took a look at this one before the Hangout, and it also looks a little bit weird. So I'm trying to see what we can work out here with regard to those URLs. I saw that you provided some sample URLs; that's really, really helpful. That's what I took and passed on to the team here, to try to figure out what we can do to make that process work a little bit better. In general, what I think is happening here is that, for whatever reason, our algorithms believe these pages are equivalent and that we can fold them together. Because of that, we pick a canonical from one of these pages. But looking at the pages manually in a browser, they're actually quite different pages, so folding them together would not make sense, and therefore selecting a canonical from this set would also not make sense. One of the things I've seen in the past that can lead to something like this is when we can't render the content properly, when we can't actually access the content properly. When we basically see an empty page, then we think, oh well, this is the same as the other empty page that we saw; maybe we can fold them together. So offhand, that's the direction I would take: think about how Google might be concluding that these pages are equivalent. Is it possible that, in the mobile-friendly test, they don't have actual content? Could it be that I'm showing an interstitial to Googlebot accidentally, and only that interstitial is being indexed? What might be happening here? I didn't have a chance to look into it in that much detail yet, so it might be that something like this is happening on your side, or it might be that something weird is happening on our side and we need to fix that. But that's the direction I would take in a case like this.
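A rough way to test that hypothesis from the outside is to compare the visible text the server actually returns for two URLs that were folded together; if both come back as near-empty shells, that matches the failure mode described above. A sketch assuming requests and beautifulsoup4, with placeholder URLs:

```python
# Rough check: does the served HTML of two "duplicate" URLs actually differ,
# or do both look like an empty shell once scripts and styles are ignored?
import requests
from bs4 import BeautifulSoup

def visible_text(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()
    return " ".join(soup.get_text().split())

a = visible_text("https://example.com/page-a")  # placeholder URLs
b = visible_text("https://example.com/page-b")
print(len(a), len(b), "identical" if a == b else "different")
```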
I want to navigate my customers a little bit better on my website, and I want to make sure that this wouldn't confuse Google. The question goes on: I'd like to set up my URL structure with the domain, then the category, then the product in a path, or maybe I'd set it up differently. What should I do? Which URL structure should I pick?

From our point of view, you can use any URL structure. If you're using a path or subdirectory structure, that's perfectly fine. What's important for us is that we don't run off into infinite spaces. So if you use URL rewriting on your server, make sure it's not the case that you can take the last item in the URL and just keep appending it multiple times while it always shows the same content. It should be a clean URL structure, where we can crawl from one URL to another without getting lost in infinite spaces along the way. You can use URL parameters if you want, but if you do, then, like I mentioned in one of the previous questions, try to keep them within reasonable bounds, so that we don't, again, run off into infinite spaces where lots of URLs lead to the same content. But whether you put the product first or the category first, or use an ID for the category or write the category out as text, that's totally up to you. And it doesn't have to be aligned between your e-commerce site and your blog; it can be completely different on both. So I think it's good to look at this, but I wouldn't lose too much sleep over it. Just find a URL structure that works for you in the long run, in particular one that you don't think you'll need to change in the future. Try not to get too bogged down; pick something that works for you and your website.

Last Hangout, we discussed the "carbine tips" site and another website, to see whether they had related problems, because they're also having trouble ranking.

I took a look at this one before the Hangout, and I don't think there's anything related between those two. I also passed it on to the team to take a quick look at, to see if there's something maybe otherwise stuck that we can help to get unstuck. But otherwise, they're completely separate websites and treated individually.

We're developing an application with Angular Universal. For some sections, we want to change the appearance of the URLs in the browser, but keep them the same on the server side. So for the server, it would be "luxury goods / leather bags," but the user would see just "leather bags." Is there any problem with this in Angular Universal using dynamic rendering?

So just from a practical point of view, Googlebot doesn't care what you do on the server; you can track that however you want there. The important part for us is that we have separate URLs for separate pages, that we have links Googlebot can follow, real a elements with an href pointing to a URL, and that we can access these URLs without any history associated with them. So if we take one URL from your website and copy and paste it into an incognito browser, it should load that content. And if it loads that content, and you have proper links between those pages, then from our point of view, how you handle it on your server is totally up to you. Whether you use Angular Universal with dynamic rendering, or something of your own that you set up, is all totally up to you. That's not something that we would care about. It's not even something we would see, because we see the HTML that you serve us and the URLs that you serve us.
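A quick way to verify the "real links" part is to fetch a page with no cookies or history and list the a-element hrefs a crawler would actually find. A minimal sketch, assuming requests and beautifulsoup4 and a placeholder URL:

```python
# List the crawlable links on a page: actual <a> elements with an href,
# fetched with no cookies or session state, the way a crawler would see them.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page = "https://example.com/leather-bags"  # placeholder
html = requests.get(page, timeout=10).text  # fresh request, no cookies carried over
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a", href=True):
    print(urljoin(page, a["href"]))
```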
My website generates individual web pages through an API, but they are not interlinked; no links are reachable through clicks. It has a search box, and every individual page shows search results. Google is successfully crawling all the links submitted via the sitemap file. Does Google see this as a valid practice, or will links in the millions harm or freeze rankings?

So there are lots of aspects of this question where I'd say this sounds kind of iffy, and some things that sound kind of OK. If Google is already indexing these pages, then something is working out right. In general, I'd be careful to avoid setting up a situation where normal website navigation doesn't work. We should be able to crawl from one URL to any other URL on your website just by following the links on the pages. If that's not possible, then we lose a lot of context. If we're only seeing these URLs through a sitemap file, we don't really know how they're related to each other, and it makes it really hard for us to understand how relevant a piece of content is in the context of your website, and in the context of the whole web. So that's one thing to watch out for.

The other thing to watch out for: if you're talking about millions of pages that you're generating through an API with a search box, and just submitting those via a sitemap file, I'd be kind of cautious with regard to the quality of the content you're providing. In particular, if you have product feeds, if you're using RSS feeds to generate these pages, or if you're doing anything to automatically pull content from other websites or other sources and just republish it on your site, then that's something where I could imagine our quality algorithms not being so happy. And similarly, if this is all really completely republished from other websites, I could imagine the webspam team taking a look at it as well and saying, well, why should we even index any of this content? We already have all of it indexed from the original sources. What is the value that your website is providing that the rest of the web is not providing?

So that's something to watch out for. I don't want to suggest that your website is spammy; I haven't seen your website. But it is something we do see a lot. As a developer, you go, oh, I have all of these sources, and I can write code; therefore, I can combine all of these sources, create HTML pages, and have a really large website without doing a lot of work. That's really tempting, and lots of people do it. Lots of people also buy frameworks that do it for them. But it doesn't mean you're creating a good website, and it doesn't mean you're creating something that Google will look at and say, oh, this is just what we've been waiting for; we will index it and rank it number one for all of these queries. It might look like very little work in the beginning, because you can just combine all of these things, but in the end you spend all this time working on your website while actually not providing anything of value, and then you end up starting over, trying to create something new. So it looks tempting to save a lot of work in the beginning, but in the long run, you basically lose that time.
So it might make more sense to figure out how you can provide significant value of your own on your website, in a way that isn't available from other sites.

We're facing an issue where lots of resources couldn't load, due to which the page isn't getting rendered in the snapshot for Googlebot. While debugging these issues, we couldn't find a solution, and Google is marking them as "other error." What could that be?

So this is also a fairly common question. What is essentially happening here is that we're making a trade-off between a testing tool and actual indexing. Within the testing tool, we try to get information as quickly as possible, directly from your server; but at the same time, we also want to give you an answer reasonably quickly, so that you can see what is happening. What tends to happen is that if your pages require a lot of resources in order to load, our systems essentially time out: we try to fetch all of these embedded resources, but we don't have enough time, because we want to provide an answer to you as quickly as possible. So you end up seeing these embedded resources not being pulled, and you see an error like that in the live rendering of the page. When it comes to indexing, though, our systems are quite a bit more complex. We cache a lot of these resources, so if we try to index an HTML page, we'll know we've seen all of these CSS files before; we can just pull them out of our cache without fetching them again, render those pages normally, and that just works.

There are maybe two things you can do to improve this, for the testing tool and for users in general. On the one hand, you can reduce the number of embedded resources that your pages require. So instead of having 100 CSS files, you throw them into a tool and create one CSS file out of them. That makes sense for both users and search engines. You can do the same for JavaScript: minify it, combine things, and make packages rather than individual files. I think that's a good approach. The other thing is, if you're seeing this happen for your pages and you don't have a lot of embedded content, that's a hint that your server is a bit slow and that we can't fetch enough content from it to make this work. That might be a chance to look at your server and your network connectivity, and to think about what you can do to make that a little bit faster, so that these tools don't time out, and so that it's also faster for users. In both cases, the net effect is that users will mostly see the speed improvement, but the side effect is that you'll be able to use these tools a little bit better, because they'll tend not to time out as much.
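To figure out which side the problem is on, you can tally a page's embedded resources and their fetch times yourself, along the lines of the Chrome DevTools and webpagetest.org advice that follows. A rough sketch with requests and beautifulsoup4, against a placeholder URL:

```python
# Tally how many sub-resources a page pulls in and how long each takes,
# as a rough proxy for what a rendering fetch has to do.
import time
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

page = "https://example.com/"  # placeholder
html = requests.get(page, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# Collect the scripts, images, and stylesheets a rendering fetch would load.
tags = soup.find_all(["script", "img"], src=True) + soup.find_all("link", rel="stylesheet")
resources = [urljoin(page, t.get("src") or t.get("href")) for t in tags]
print(len(resources), "embedded resources")

for url in resources[:20]:  # sample the first 20
    t0 = time.time()
    r = requests.get(url, timeout=10)
    print(f"{time.time() - t0:5.2f}s {len(r.content):>9} bytes  {url}")
```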
OK, thanks, John. That's my question. And I think this is happening for almost all of the pages. One of the things we're observing is that about 60% of those resources are images. That's something we're required to load, to show to Google as well as to users, and we're not able to figure out why this is happening. We're using CDNs and better servers for those specific resources, but we still couldn't understand why it keeps happening. And we're doing lazy loading for those images that aren't in the first fold. But still, we're not able to understand why it's happening for each URL, with each resource. So that's something we're not able to debug.

Yeah. So what I would do there is use some other tools to figure out: is this really a problem on your side, where things are a little bit slow? Or is it just something on Google's side, where we tend not to have as much time to fetch all of these individual resources? You could use the Chrome developer tools, the Network tab that you have there, to figure out how many of these resources are being loaded and how long they take. You can also use webpagetest.org, which creates a waterfall diagram for your content, listing the time it takes for those test URLs and the size of the resources that were returned. Using those two, you can figure out: is it the case that it just takes 20 seconds to load my page with all of the embedded content, with all of the high-resolution images? Or is it the case that these testing tools say my page loads in three or four seconds with all of the embedded content, and therefore it's probably more an issue on Google's side, and I don't have to worry about it?

OK.

It sounds like something where we should have a little bit more information in the URL Inspection tool, so that you can tell the difference between "you have to fix something" and "Google has to find more time to fetch things."

Hello, is this working?

Yes.

Oh. John, first of all, I want to say a very good early morning from Pittsburgh, Pennsylvania.

Oh, wow. It must be the middle of the night. That's not morning.

Hey, my client's website was one of the strange websites, and I've never used this before, so I didn't know how to chime in. Is there a means through which I would be able to get any type of feedback after your team has checked it out?

Ideally, if you have a forum thread in the Webmaster Help Forum, we can post something there. In most of the cases where we escalate things internally, it's not a case where we would be able to give you any kind of additional information back, unless there is really something that you need to change on your side. Sometimes it's just a matter of, oh, we need to fix this on our end, and we'll try to do that as quickly as possible. So it's not always the case that we have something explicit that we can bring back to you.

All right, thank you very much.

All right. Any other questions from any of you?

John, a really quick one. This is from both me and Ramon. We're working on building a new website on another domain, and we wanted to make sure that there's no connection to the existing one, just to rule out any issues further down the road. That new one basically has, I don't know if it's a 404 page or a 201, but it doesn't have any content yet. What worried us a bit was that a few weeks ago we did an info: query, and it actually showed the current site as the main one. We noticed this was probably because the non-www version was still redirecting to the domain while the www version was accessible. So we removed the redirect, and it seems like the info: query shows the correct domain now. I was just wondering if there's anything we might need to check in order to make sure they're completely separate, with Google not folding anything together?

Offhand, it looks like that landing page is indexed normally.
So I think that's looking pretty normal. You're not seeing any connection to the current site; I don't see anything there. I mean, what I do see is that there are some search pages that maybe used to be indexed, but yeah.

Yeah. Well, as long as there's nothing related to the current site.

I don't see anything that would be a red flag.

Cool.

We don't even have a red flag anymore; we have a special-color flag. It seems like everyone here today has special flags. And I don't know. On the one hand, it's awkward when some of these special cases come up, because it always feels like maybe there's something broken on our side. On the other hand, it's better to know about these cases, to figure out what we can do to improve them, and to know what we can do to improve the communication around these kinds of cases. Maybe there are things we can highlight in Search Console, for example, or messages we could send out through Search Console if there's something that people could be doing differently. So I think it's always useful to see what all is happening.

Thanks.

Cool.

Hey, John, might I ask one more question?

Sure.

I've noticed, as a result of the last several updates that have been coined the "Medic" update, that some websites no longer show on the first page of search results for their own company, for their own brand. And I was wondering: why, in general, would that be? What is Google not liking, such that other websites that reference the site are on the first page, whereas the actual website itself is not?

I think that's always a bit tricky. There are usually two aspects involved there. One is more an issue with newer websites, where the website name or the company name is more like a generic query. If, for example, the website's name is Best Lawyers in Pittsburgh or something like that, then on the one hand, that might be the company name; on the other hand, if someone were to type that into Search, we would probably assume that they're not looking for that specific company, but rather for information matching that query. So especially with newer companies, that's something we see every now and then. We see it in the forums, where people say, oh, I'm not ranking for my domain name, and then their domain name is something like, I don't know, BestVPNProviders.com. Well, it's a domain name, but it doesn't mean that you will rank for that query. So that's one thing.

When it comes to sites that are a little more established, it's usually more a sign that we just don't really trust that website as much anymore. That's something where we might recognize that people actually are searching for this company, but we feel that maybe the company website itself is not the most relevant result here. Maybe we feel there is auxiliary information about that company that is more important for users to see first, which could result in something like this happening. Usually that's more a matter of things shuffling around on the first one or two pages of the search results. It would be really rare for something like that to result in a website not showing up at all in the first couple of pages, and I think that's what you highlighted there with your question.
And that's something where I think, well, even if we didn't trust this website as much anymore, we should at least have it somewhere in the search results, because if we can tell that someone is explicitly looking for that website, it would be a disservice to the user not to show it at all. For generic queries, one could argue that maybe it's not the perfect result; but if we can tell that someone is really looking for that website, we should at least give the user a chance to see it. So that's why I took that and said, well, maybe we should be catching this a little bit better. I don't know if our algorithms are correctly understanding how trustworthy your website there would be. I don't know the website, so that's really hard for me to judge. But even if we thought it wasn't trustworthy at all, maybe we should still show it somewhere in the search results on the first page.

Thank you. Yeah, over the last several algorithm updates, for some reason, I have been getting the idea that Google no longer trusts the website, and I've been trying to figure out why that is. But I agree that clearly, if the business has a unique name and people are looking for that business, then that's what they should see. So thank you.

The other thing to maybe look at, if you haven't done so already, is the quality rater guidelines that we have. There's a lot of information about trustworthiness in there. Some of it is worth taking with a grain of salt; it's not that we take it one-to-one and use it as a ranking factor. But there are a lot of ideas in there, and especially when you're talking about a topic like a legal website or a medical website, it does make sense to show users why you're providing this information and why they should trust you.

All right, so let's take a break here. It's been great having you all here. Thanks for all of the questions and the comments along the way. Like I mentioned, I sent some of these sites off to get reviewed. I don't know if there's anything specific that we'll get back, but some of you also have forum threads, so if I get something back where people are saying this website should be doing this or that differently, I'll try to get back to you in the forum thread there as well.

All right, then. With that, I wish you all a great Friday and a great weekend. Hopefully things are nice on your side of the world, and we'll see each other again, maybe in one of the future Hangouts. Bye, everyone.

Thank you, John. Bye.