All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these office-hours hangouts, where folks can join in and ask any question around their website and web search, and we'll try to find an answer. A whole bunch of things were submitted already, but if any of you want to get started with the first question, feel free to jump on in.

Hi, John. Hi. I have a few questions, very small ones, actually. The first question is about alt tags, alt tags for images. So if I have a web page and I have a lot of images on the web page, and some of the images don't have an alt tag, will that affect the organic ranking of the web page, or will it only affect my image ranking?

We treat the text in an alt attribute as both a part of the page, so kind of for web search for that page, as well as more information about the image itself for image search. So if the text in your alt attribute is something that's completely unique to the whole page, then we wouldn't otherwise have that information if you don't give it to us in some other way. But usually, if it's an image on a page and the image has a little bit more context in the rest of the page, then that's just extra information. It's not something that's critical for web search.

And the next question is, some of our clients have a tagline or slogan for their business. They put it on every page, usually above the footer, so it appears on every page. Now, sometimes those sentences contain a keyword. If we use that keyword for interlinking, is it bad? Because that text is appearing on every page.

I don't think that's a problem. That's a completely normal kind of website structure, so I don't see an issue with that.

OK. The last question, this is also about images. So if I have a blog post, it is about the best five places.
And I have added a featured image, which is an image of the five places. What should the alt attribute be? Should it be "the best five places" or just "five places"?

The alt attribute should be an alternate text for the image itself. So it's not like a heading on a page or anything like that. It should be kind of describing the image itself. Thank you. Sure.

I'll go next. Hi. All right. Hi. So I inherited a blog from my company. And the template on Squarespace makes every single H1 the site title. So there are like 500 blog posts with identical H1s, and the blog post titles are H2s. Is this something I should address with them? Because I'd have to ask them to change the template, and that takes a bit of work. So is it really bad that every page has the same H1?

OK. It's not terrible. So the H1 heading gives us a little bit more information to kind of understand the page better. But it's really common for sites to have kind of this setup where you have the same H1 heading on all pages. It's not the best setup, so it's something that at some point it might make sense to look into. If you're talking with them about the template anyway, then this is something that's probably worth fixing. But it's probably not going to visibly affect your site's ranking. So it's kind of like a best practice that's good to do, but not something that is critical for a site.

OK. And if I put an H1 text in the body, will that help?

That works, too. Yeah. That helps us, too. So it's not that there's a limit of one H1 tag per page. If we find multiple H1 tags, we'll make do with that.

And does it matter which one will kind of float to the top, the one I put in or the templated one?

Not really, no. OK. All right. Thank you. Sure.

All right. Any other questions before we jump into the submitted ones? OK. Let's see. I think your question is actually the first one on the list, the identical H1s, so we can skip that one. The next one is about hreflang.
We're an international brand, and we have five sites in Latin America. All have pretty similar content, and we do use hreflang tags. We have another site in the US that serves Spanish customers. What we recently noticed is that for the brand name in Peru, the Peru homepage no longer ranks, and the US Spanish version does. This only happens in Peru, and only for the brand name; for the product pages, it works fine. What could be the cause? Having users land on a different market interferes with the brand experience that we wanted to give visitors.

So I think, looking at the last aspect there, that sometimes people get to the wrong page is something you can never completely exclude. It's always possible that through some means, be it through search or be it through clicking on links, people will land on the wrong version of your site. So you need to have some kind of a backup on your site, especially if you strongly care about the differences between the individual regions or language versions that you have. So usually what I recommend is, if you can recognize that someone is on the wrong version of your site, then show a small banner on top and say, hey, it looks like you're from this country, or you're looking for this language content; we have this content here. That makes it possible for everyone to go to the optimal version of the site, but at the same time allows Googlebot to crawl your website normally, because it doesn't automatically get redirected to the version that you think Googlebot would need to see. So some kind of a banner would always make sense.

With regards to the wrong Spanish version showing, that's probably due to the content being seen in our systems as being identical or almost identical. What happens then is we fold the versions together for indexing. We try to use the hreflang to show the right URL in the search results, but we fold them together for indexing.
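As background, the hreflang annotations being discussed look something like this. This is a sketch with made-up domains; each page in the set should carry the full list of alternates, including a self-reference:

```html
<!-- Hypothetical hreflang annotations for the home pages; the domains are
     invented. The same full set goes on every version of the page. -->
<link rel="alternate" hreflang="es-pe" href="https://example.pe/" />
<link rel="alternate" hreflang="es-mx" href="https://example.mx/" />
<link rel="alternate" hreflang="es-us" href="https://example.com/es/" />
<link rel="alternate" hreflang="en-us" href="https://example.com/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```

Note that hreflang only helps pick which URL to show; as described above, it doesn't stop near-identical pages from being folded together for indexing.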
And you might see some effects of that, both in Search Console and the reporting, because we report on the canonical URLs in Search Console. And you might sometimes see that we show the wrong URL in the search results because we couldn't work it out with the hreflang properly. But that's something where, especially if this is your home page and not one of the many product pages that you also have, what I'd recommend doing is trying to make sure that the content on these pages is significantly different, so that our systems don't assume that they can fold this content together. So that's generally the direction I would head there. On the one hand, for your question in general, make sure the pages are significantly different. And then, with regards to people landing on the wrong version anyway, make sure you have some kind of a banner as a backup.

I'm working on a website where less than 1% of the content can be considered erotic. However, 99% of the queries shown in Search Console are erotic queries. We wonder if our website is tagged as an erotic website, because we would like to have traffic on the full website too. Or should we delete or noindex our small amount of erotic content?

So in general, this is something that can be a little bit confusing with regards to the queries that you see. I am assuming what is happening here is that there are just a lot of people searching for these maybe erotic keywords, and your site happens to be ranking for those keywords. So that's why a lot of people find your site there. But that doesn't mean that your site is not visible for other queries. So even if you were to remove that content completely, that would not affect how the rest of your site would rank in the search results. It's a little bit different if a significant amount of your site's content would be considered adult content. In cases like that, our SafeSearch algorithms might kick in.
And they might say, well, perhaps the whole website should be filtered by SafeSearch. And then that would be a case where kind of that erotic content that you have would be affecting the rest of your content. But if it's really a tiny amount of the content on your website, then that's not something that would generally be happening there. So from my point of view, you wouldn't need to remove that content. It just makes it sometimes weird to look at the statistics, because there are lots of people searching for this small amount of content and few people searching for the rest of your content. But removing that small amount of content wouldn't mean that you would have more people finding your site for the other queries.

Everyone knows about the source/medium combinations google/cpc and google/organic. But does anyone know what google/web search is?

So this looks like a Google Analytics question, and I really don't know what I can say about Google Analytics. So I'd recommend checking in with their help forum or checking in with them on Twitter to try to get this answered. I really don't know how that would be compiled there.

We think there's a serious bug in Google Discover in Sweden.

I think we talked about this one last time, too. I passed this on to the Google Discover team, I think last week or the week before last, when this came up last. But I don't know what the status is there. Usually with these kinds of things, we pass them on to the appropriate team, and they try to prioritize them appropriately. It's not always the case that they'll just go off and fix it completely, or that they even agree that there's actually an issue here. But I can double check with the team here to see what's up with that.

In the developer docs, Google says an organization logo should be a minimum size of 112 by 112 pixels. Considering the number of potential uses for the image, would larger dimensions, maybe 1,000 by 1,000, be your best practice?
Also, does it matter whether the image is square or rectangular?

To be honest, I don't know the specifics of this kind of logo and the markup and its usage. But if we have a minimum image size, you're always welcome to provide a larger image. So from that point of view, if you think that it makes sense to have a larger image, that's totally up to you. With regards to square versus rectangular, I would double check the documentation here, because there are some kinds of images where we need to have a certain image ratio, and other types of images where we don't really have any guidelines on what the image ratio should be. For the cases where we don't have any guidelines on the image ratio, I would check to see how those images are actually used in Search. Sometimes we use more rectangular images in Search, and in that case, it probably makes sense to provide one. But if you see that we're using a square image and you provide a rectangle, then we kind of have to pad things out or crop to the center part of the image. So I'd double check that there.

A question about organization schema markup. What's the difference between an ID and the URL of an organization? And when should we use a different ID for the same organization?

I don't actually know. So, good question. I'll need to try to find something here. What might be useful in the meantime is to post this in the Webmaster Help Forum. I believe there are separate sections specifically for structured data and rich results; see if someone there has any input on this specifically.

Are backlinks important ranking factors? Because nowadays, 80% to 90% of websites are buying backlinks, which I think is very unethical. But these websites are also ranking on the first page. Why is that?

So we do use links in our ranking algorithms. We use a ton of other factors as well. So it's not the case that links are the one thing that will make your website go up in the search results regardless of what other people do.
This is something where we also see that a lot of sites do things that aren't really necessary for their website and web search. They'll go off and buy a ton of links, and then we ignore all of those links. So just because you're seeing people doing something that looks kind of weird doesn't necessarily mean that they're actually profiting from that, in the sense that there are lots of reasons why sites can rank in the search results, and it doesn't necessarily have to do with anything sneaky that they're doing. So we get this question around links. We get this question around keyword stuffing, around hidden text. All of these aspects come up regularly. And in pretty much all of the cases that I've looked into, where we work together with the web spam team and the search quality team to double check why these sites are ranking, it is pretty much always because of other things. On the one hand, that can be kind of frustrating if you're seeing people doing crazy stuff and you're doing normal things and not showing up in the search results. On the other hand, that's also useful, because our systems are used to people making mistakes, and they try to do the right thing regardless. So if you went off and bought links in the past because someone told you to do it, and you didn't realize that was kind of a useless thing to do, then it's not the case that we're going to remove your website completely from web search. It might still appear in the normal search results based on the other things that you're doing really well.

If your e-commerce site's organic traffic and keywords have dropped by more than 90% after a Google algorithm update, what optimization would you do?

That's really hard to say. So that's something where I don't have any specific guidelines on what exactly you would need to do, because there are lots of reasons why a site might drop in the search results. It might be due to something we changed in our algorithms.
It might also be due to something technical on the website itself. So this is something where, if you're really not sure which direction to go, I would go to the Webmaster Help forums and try to get some help from other people who've run into similar situations and who can give you some tips on what to double check. Sometimes there are simple technical things you can do. Sometimes it takes kind of a rethinking of the website in general, which takes a long time and is really not that easy to do.

A question about pagination. Is it bad for SEO, or seen as duplicate content, to have the same copy and image at the top of every page in a paginated series? (Two example URLs were given here.) In this instance, the URL parameter for pages has been set in Search Console to "paginates" and "let Googlebot decide". Finally, should the H1s and page titles also be different for paginated pages, to reflect the page number?

So headings and page titles kind of help us with paginated series to understand that these series belong together. So if they're the same, or if you have something like a number in the headings and titles, then that helps us a little bit. We can also generally figure this out through the links on the page, where we see, well, this one has a link to the next page and it has a link to the previous page, so maybe that's a part of a series that we can hold together. With regards to duplicate content, with regards to the header on a page, the general layout of a page like this, that's something you generally don't need to worry about. So we would recognize this as duplicate content, but it's not that we would demote the website because of that. It's more that we recognize there are some blocks of text on this page that are the same as on other pages on your website. So if anyone is looking specifically for something within that block of text, then that's something where we try to find the most appropriate version of that piece of content on your website.
But it's not the case that we would say, well, this piece of text is duplicated multiple times, so we will treat this website as being bad. So from that point of view, usually this kind of pagination, where you have blocks of text or images that are shared across different versions, is perfectly fine.

I wonder whether Google's algorithms transfer authority if there are more than four redirects to one page. Once we moved to HTTPS, there was a huge fall in search traffic, and I found that there are many redirects from old domains, and the redirect to HTTPS might be the fifth or the sixth one. I also found only one answer to this: Google uses the first four redirects for SEO.

So this is kind of a tricky thing, in the sense that, on the one hand, you're correct: we follow a limited number of redirects. However, we follow a limited number of redirects during the crawl of one page. So if we want to access one URL, then we will try to follow those redirects until we get to the final state. And during one crawl cycle, we'll follow, I believe, up to five redirects before we say, OK, we couldn't reach the final page; it seems to be redirecting a bunch of times; we will check again maybe tomorrow. And then we'll start again tomorrow from that final state that we reached, to kind of see how many redirects we can follow from there. So it's not the case that a website would be demoted because it has a lot of redirects. It's just that if we see a lot of redirects during the crawling of URLs that we try to crawl, then it might be that it takes a little bit longer for us to actually reach the content. So it's always a good practice to redirect as quickly as possible to the final state. Then you don't have to worry about this. It also makes it faster for users. But once we've figured out what the final URL should be by following all of these redirects, we'll generally focus on that one anyway.
So the next time we try to crawl something from your website, instead of starting at the beginning of all of these redirects, we'll start at the end and say, well, the last time the content was here; maybe this is where the content will be, and we'll try to crawl from that specific place. So with that in mind, if you're shifting from HTTP to HTTPS, and you have some old domains, and you have a redirect from www to non-www, and all of these could theoretically be lined up, it's not the case that your website will rank worse because of that, because we'll follow all of these redirects. We'll remember that final URL, and we'll focus on that final URL for the future, rather than trying to follow all of these redirects all the time. So it's kind of like a one-time thing where everything is a little bit slow until we follow the redirects. But the next day, or the next time we crawl, we'll start where we actually find content.

How do you see SEO changing in the next five years?

Oh my gosh, I have no idea. It feels like sometimes very little changes, but sometimes there are bigger changes as well. I don't know. I could probably spend an hour discussing various ways that SEO might change. I think in general, SEO is not going to go away, so don't worry about that. URLs are going to continue to matter, and having the proper technical setup on a website will continue to be very important. On the other hand, one thing that will always change is user expectations. And when user expectations change, then it's a matter of making sure that your website remains relevant for those new expectations. But from a technical point of view, I think there are a lot of things that will essentially continue to work, because they're a good basis.

There's much debate around Google's interpretation of schema, and thus the inclusion of rich snippets in the results. We spent significant time adding schema markup to a site with little to no results.
How does one try to influence this process a little better, so that we can best serve our customers' needs in the search results? The industry behind these specific search terms has seen a mass removal of snippets for many terms they historically existed for.

I have no idea which industry you're talking about, so that's really hard to say. In general, there are a few things that we do care about with regards to rich results. On the one hand, we want to provide something in the search results that gives the user a little bit of a preview of what they would see when they go to a page, so that they can more easily understand which pages are relevant for them. So if you're doing a search for something, and the rich result that we show is something that provides value to the user and makes it so that they understand why your page is the best one that they should go and visit, then that's something we'd like to show. On the other hand, if the rich results that you're providing just add extra bling to a search result, kind of like "click here, click here", rather than actually giving the user more information about why this specific page is more important, then that's something we probably wouldn't want to show. So that's kind of the general idea there. And that's something that can evolve over time, because we will have newer rich result types in the search results over time. And people will see these and say, oh, well, I can highlight the value of my content there better. And that's kind of a good use. And there will also be people who say, well, I can take advantage of this extra room in the search results by drawing more attention to my site without actually giving more information about why this page is more useful. And people will try to kind of do sneaky stuff along those lines. And that's something that I think will continue to evolve, in that people will try new things out. We will launch new features in the search results.
We'll see where it makes sense and where it just adds extra clutter in the search results. And these kinds of things will continue to change. So that's something where we do spend a lot of time discussing the policies around this. We try to make the policies around rich results as transparent as possible, because it's not kind of our idea to make it hard to appear well in Search. But rather, we want to give you some information about where we think it makes sense to add extra value, so that users understand why your page is the one that they should be going to.

What is quality content in Google's eyes? If two people are writing about the same topic, it's possible that they have different opinions on the same thing. Then how does Google decide which one is better? Because I'm seeing that Google is sharing reviews and locality, but some people buy things and other people don't.

I'm not really sure where this question goes. With regards to quality content in general, this is something where you, as the site owner, probably know a lot more about what is actually quality content for your specific kind of site. So that's something where I wouldn't worry too much about what Google thinks about quality content; rather, you need to show that you really have something that's unique and compelling and of high quality. So instead of trying to work back how Google's algorithms might be working, I would recommend trying to figure out what your users are actually thinking, and doing things like user studies: inviting a bunch of people to your office, or virtually, to show them something new that you're providing on your website, and asking them really hard questions, where sometimes the answer might be, we don't like your website, or we're confused by your website, or we don't like the color of your logo, or something. But this is the hard feedback that's really important to get.
And a lot of times, these are things that you might not agree with, but if all of your users are saying this, then maybe that's something that you need to consider as well. This is something we do all the time as well. We do A/B tests in the search results all the time to see how we can make sure that we continue to provide relevant results, even when users' needs and expectations continue to change over time. We do user studies in Search Console as well, where we try new features out and try to kind of see in which ways users are either confused by these new features or in which ways they can work better with them. These are things you always need to do, and you should focus on your users rather than on how Google's algorithms might currently be trying to figure out what is high quality content. One of the other reasons why you shouldn't be focusing on how Google's algorithms figure this out is that Google's algorithms will also continue to evolve, and will also continue to focus on the users and see what they need. And if you're just focusing on Google's algorithms, you're always a step behind. So try to focus on your users, and figure out what their needs are and what you can do to provide something that is really unique and compelling and different from everyone else in that area that you're active in.

A few questions around breadcrumb markup. I noticed Google will drop the final page from the breadcrumb trail displayed in the search results. I think we do. Does Google drop the final page from a breadcrumb trail if the page title is not included in the breadcrumb list that's visible to users on the page, even if the page title is included in the schema markup? Will an excessive character length cause Google to drop the final page title in a breadcrumb trail in the search results? I think we looked at this last time, actually, or maybe in one of the other hangouts.
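For context, breadcrumb markup of the kind being asked about looks like this. This is a minimal sketch; the names and URLs are made up, and the final item deliberately omits its URL, since it represents the current page:

```html
<!-- Minimal BreadcrumbList in JSON-LD; names and URLs are invented.
     Whether the final item is displayed in search results is up to Google. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home",
      "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Widgets",
      "item": "https://www.example.com/widgets/" },
    { "@type": "ListItem", "position": 3, "name": "Blue Widget" }
  ]
}
</script>
```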
But in general, I'm not sure if we do this by policy or if it's just kind of the way our algorithms are set up at the moment. But if the last item in the breadcrumb trail is actually the page that's being linked to, like the page that's shown in the search results, it doesn't really make sense to show that twice. So that's something where I think, kind of just from a usability point of view, dropping the last item, if that's the page that it's actually on, is kind of reasonable. It's not so much a matter of the character length there. But depending on the type of search result, obviously, there is limited space available. So it might happen that we have to cut things off at the end, or cut things out in the middle, and you'll see those kinds of "dot dot dots" in the breadcrumb trail along the way there. But that's something that I think is just mostly focused on usability. So it's not that your website will rank better or worse if you include your last page in the breadcrumb trail as well.

We have a separate m-dot site. Ooh, oops. Now I lost the question; just a second. Too many windows. We have a separate m-dot site. After mobile-first indexing, do we need to make any change to our canonical tags, structured data, or sitemap? Currently, we're using the desktop URLs everywhere; the m-dot URLs are used in the alternate tags.

If you have the m-dot site set up properly, with the link rel alternate to the mobile version and the link rel canonical to the desktop version, that's perfectly fine, also with mobile-first indexing. We recommend, with mobile-first indexing or in general, trying to have just one version of a site. It's not that we would stop supporting m-dot sites in the future. It's more that things are just a lot easier if you have just one URL rather than two URLs for the same piece of content.
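The separate m-dot setup described here, with rel alternate on the desktop URL and rel canonical on the mobile URL, looks like this (a sketch with a hypothetical domain and path):

```html
<!-- On the desktop page, https://www.example.com/page (URL is hypothetical): -->
<link rel="alternate"
      media="only screen and (max-width: 640px)"
      href="https://m.example.com/page" />

<!-- On the mobile page, https://m.example.com/page: -->
<link rel="canonical" href="https://www.example.com/page" />
```

The pair of tags tells Google the two URLs are the same piece of content, with the desktop URL as the canonical; with responsive design, neither tag pair is needed, which is part of why a single URL is simpler.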
So that's kind of the direction I would head: if you're looking at a revamp of your website at some point, then try to go towards responsive design or dynamic serving, where you have just a single URL for your pieces of content.

On my price comparison domain, the error "submitted URL seems to be a soft 404" continuously increases. Possible reasons, I think: the pages have been noindexed for a very long time, or we changed the meta robots tag from index to noindex frequently. My question is, how do we get these URLs out of this state and get them indexed?

Yes, if you have a URL that has a noindex on it, then we might treat it as a soft 404 URL. That's kind of equivalent, in that sense, with regards to how it would be shown in Search. So that's generally not something that you'd need to kind of work around. With regards to making these pages indexable again, that's something where essentially you need to let us know that these pages have changed, so that we can recrawl them. And once we see that there is actually content on these pages, and that there's no noindex blocking indexing, then we will try to index those pages normally again. So you can do the usual things, like submitting a sitemap file with the last modification date set to the time when you changed the pages from noindex to index. And then we can go out and double check those URLs again and see that there's actually content there. But if there is no content on those pages, if it's like an empty search results page, or if it's a page that has a noindex meta tag on it, then it can happen that we flag that as a soft 404. And that's kind of correct in a case like that.

Can adding dofollow links increase the time to download a page in the crawl stats report?

I'm not really sure how that would be connected.

From February 16, our crawl stats are showing a heavy drop in crawling due to an increase in time to download a page.
Coincidentally, from the same day, we're seeing soft 404 errors that have increased from 3,000 to 20,000 pages. "Crawled, currently not indexed" has also doubled. The number of valid pages has dropped from 292,000 to 205,000. One change that we made on our site on the 6th of February was that we made our brand filter crawlable on around 1,000 "index, follow" pages. On a few pages, this added more than 500 crawlable links. Should we revert this change?

So I think, in general, this change shouldn't be affecting the time to download a page significantly, other than the fact that if you're adding content to a page, which could be links, which could be just text, or could be images that you're linking to, anytime you're adding additional content to a page, then obviously we need to download that content in order to index that page. So if you're adding a significant amount of content, then that could be affecting the time to download a page. But if you're just adding, I don't know, a small amount, or a relatively small amount compared to the rest of the page, generally, that wouldn't be affecting it. However, what it sounds like is happening is that you're not just adding content to these pages, but you're actually allowing indexing of a significant amount more content that's being dynamically generated on your website. And my guess is that what is happening there is that your server is just spending a lot more time generating the content on these pages. And that's the time that we see with regards to downloading individual pages on your site. So it's not so much the amount or the content itself, but rather the time that it takes for your server to generate that content, which is what you're probably seeing there. And is that good or bad? I don't know. It's hard to say. Sometimes, if the content that you're providing like this is critical for your website, if the internal links there are critical for your website, then maybe you need to bite the bullet and say, well, my server is a bit slow.
And that's just the way it is. And that might be OK. It might also be that this time is increasing significantly more than it was before. And that could result in us, for example, crawling less of your website. And that might be the kind of thing where you'd want to look into either speeding things up on your server or finding a way to streamline things in general with regards to how you serve content. So the common approach here would be to use more caching, so that your server doesn't have to calculate all of these links and look them up from a database again, but rather can pull them out of a cache fairly quickly, so that these pages can be created a little bit faster. The other thing, I guess, worth looking at here is, depending on the type of website that you have and the setup that you have, if you're allowing filter pages to be completely crawled, that can easily result in a giant mass of URLs that your website is suddenly providing that we think we need to go off and crawl. And depending on the type of your website, maybe those pages aren't of the highest quality possible. So if they're just combinations of existing things that you're providing again, and suddenly you have 100 times the URLs as before, then you're adding a lot of overhead for something that brings you very little value. So that's something you might want to reconsider there as well, or at least look into the details of what exactly you're suddenly creating.

How accurate is the crawl stats report? In our case, Search Console is showing that the time to download a page increased and the pages downloaded per day reduced, but our logs are showing that crawling dropped when we made some changes.

So these reports are accurate. They are based on what we pull out of our crawling logs. But one tricky thing here is that these crawl stats include URLs that are also fetched by other services at Google that use the same infrastructure as Googlebot for crawling.
So for example, I believe the Google Ads landing page checks are done with the same infrastructure. The product search crawling is done with the same infrastructure. And that means that when you look at those stats overall, it's not that you can just add up the Googlebot requests from your server logs and get the same number. You might need to add the other crawling that we do to that number as well. Our structured data on our desktop site has slightly more attribute information than our mobile site. Is that OK? Which version will Google pick? Currently, we're on desktop-first indexing. So I'm guessing the desktop site has slightly more than the mobile site. So when it comes to mobile-first indexing, we will only use the mobile version of the content that we get. So if your mobile site has less structured data than your desktop site, then we will use only your mobile site, and we will not use anything from your desktop site. We still crawl your desktop site occasionally. Usually, that's something like a 20%, 80% split. So 20% maybe desktop, 80% mobile. But we crawl the desktop site primarily to make sure that we're not missing any new pages. And for indexing, we focus purely on the mobile version. So if your mobile site has less content, less structured data, fewer images, lower-quality images, anything like that, then with mobile-first indexing, we will focus on that. And we will not know about the things that are only on your desktop site. In mobile search results, Google is showing product images with each website ranking in the traditional blue listings. Is there any SEO we can do so that Google can show our product images too? Do we need any HQ images for that, or alt tags? So there's product structured data that you can use. And there are some requirements around the product structured data that you can use. So I would double-check that.
There's also, I believe, the Merchant Center, which is kind of like a console that you can use to control the crawling and indexing and kind of like how product search results are put together. And I don't know the details there, but I believe at the moment, anyone can access the Merchant Center even if you're not running product ads. So those are kind of the two places I will double check. I check my page with the W3C validator and it showed me 300 errors. Can it slow down the time to download a page? No. This does not affect time to download a page. Time to download a page is purely the time that it takes from Googlebot asking your server for a URL to your server, having provided that full content to Googlebot. What is on that page is totally irrelevant, other than that if you have a lot of text, then maybe it'll take a long time to transfer. But HTML errors are totally irrelevant for that. In general, the W3 validation is something that we do not use when it comes to search. So you don't need to worry if your pages kind of meet the validation bar or not. However, using the validator is a great way to double check that you're not doing anything broken on your site. So in particular, for other kinds of devices, for people who need accessibility features, the W3 validator is a great way to kind of get a confirmation that the markup that you're providing is pretty reasonable and is something that most kind of consumers of markup will be able to understand well. So I definitely recommend checking out the validator tool and trying it on your pages and seeing what the results are and then trying to improve things so that you're a little bit more in line with really valid HTML, because that just generally makes things a lot easier when it comes to displaying your pages, when it comes to understanding the content on your pages for things like screen readers. All of that makes it a lot easier if you have reasonable HTML. 
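A real validator is the right tool for this, but as a toy illustration of one class of error it catches — mismatched tags — here is a rough check built on Python's `html.parser`. It ignores most void elements and is nowhere near a full validator; it only shows the idea of tracking open tags on a stack:

```python
from html.parser import HTMLParser

VOID = {"img", "br", "hr", "meta", "link", "input"}  # tags with no close tag

class TagChecker(HTMLParser):
    # Toy check: push open tags on a stack, pop on matching close tags,
    # and record anything that doesn't line up.
    def __init__(self):
        super().__init__()
        self.stack, self.errors = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append("unexpected </%s>" % tag)

def check(html):
    checker = TagChecker()
    checker.feed(html)
    # Anything still on the stack was never closed.
    checker.errors.extend("unclosed <%s>" % t for t in checker.stack)
    return checker.errors

print(check("<div><p>ok</p></div>"))  # []
print(check("<div><p>broken</div>"))  # mismatches reported
```

The point is only that well-formed markup is mechanically checkable; for anything real, use the W3C validator itself.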
I'm currently working on a site that wants to make use of Google Jobs, which is coming to the Netherlands. I'm curious if there's a way to check if a job is already indexed, so I can skip the update/delete request to keep queries as low as possible. I don't think there's any automated way to check if a job is already indexed. So I'd continue using kind of the Indexing API for this and using kind of the update/delete requests to update those as needed. I know when we're migrating a website from one domain to another, we should use 301 redirects from the old domain to the new domain. Does the redirect have to be in place forever to pass link juice, or does it just need to be in place until Google picks it up once, and then we can stop paying for the old domain that we keep in order to redirect from the old one to the new one? So a 301 is a permanent redirect. And kind of as the name says, it's a good idea to have that permanently in place if you care about that. So from our point of view, our guidance is generally that this redirect should be in place for a significant amount of time so that we can really be sure that it's there. From a practical point of view, I like to recommend at least one year of having this redirect in place. Personally, I would try to keep that in place longer, as long as you can. Sometimes there are technical reasons why it's not possible to keep that redirect in place for a longer time. But I would try to keep it in place as long as possible. Definitely as long as you continue to see users or search engines kind of go to the old URLs and get redirected to the new ones. So you can double-check your logs on that. Sometimes people link to your old site and you forgot to update those links. But that's something where I'd try to keep it in place for as long as possible, and definitely at least one year. With regards to stopping paying for the old domain, I would be really cautious with that.
Because spammers love to pick up old, expired domains, and they put all kinds of crazy stuff on them. And if the old domain is, for example, your old brand name, then suddenly your old brand name could be used to promote some really crazy spam on the web. And sometimes that's not really nice to see. And especially if you've let that domain expire, then you can't do anything about it. It's not that you can go to Google and say, well, this used to be my website; therefore, I demand that nobody else is able to rank in the search results for that. That's not possible, because if you don't control that domain, then anyone can host something there. And sometimes that's reasonable content, and sometimes that's stuff you don't want to be associated with. So personally, I'd try to keep those old domains as long as you can. And if you're keeping those domains, then maybe also keep those redirects in place as long as you can. We did a crawl with Screaming Frog, and it seems it identified pages with hentry microdata as Article markup, while Google's structured data testing tool only recognizes hentry. I wonder, should I continue to put extra Article markup in addition to hentry, or will hentry be enough? I would recommend using the Rich Results Test for this and double-checking the markup that is actually required for the Article markup in our developer documentation. So the structured data testing tool tries to pull out all kinds of structured data on a page, which includes things that might not be used at the moment. So I'd really focus on the Rich Results Test, which focuses more on the rich result types that are actually shown in search, and double-checking that against the documentation that we have. Wow, OK. Still a bunch of stuff, but we're kind of running low on time. So maybe I'll just open things up to any of you if there are any questions here. John, I have a quick one. OK.
So a few Hangouts ago, I asked you about a very big, large website; it's an e-commerce website, and they have like 1,000 categories, and all of them are placed in the main menu. And you mentioned it shouldn't be a problem for you from a technical point of view; you should be able to see those links and crawl them. And one thing I didn't address was whether that is a good practice. Because if you have all of those 1,000 categories in the main menu, then probably Google kind of sees them as being at the same level of importance, sort of. So obviously, there are certain categories with subcategories, and the parent categories are targeting broader keywords, and the subcategories are targeting more niche keywords, as is usually the case with e-commerce websites. So having this kind of very flat architecture, can that become a problem in terms of rankings? Because Google pushes out its ranking signals kind of equally, instead of using a hierarchy in that sense. So can that be a factor? Having a clear hierarchy helps us a little bit. But in most cases, it's not going to be something that has a strong ranking effect in that regard. So what I imagine will probably be trickier with the case of a really flat hierarchy is that we won't know which pages belong together. When we try to show sitelinks, for example, in the search results, where we try to say, well, this is the main page, and these are the pages that are kind of subpages from this page, that will be a lot harder to do if there's a really flat hierarchy, because we don't really know what the structure is. And that's something where, if these are all categories and subcategories altogether, maybe it doesn't make a big difference. Maybe we're still showing some useful sitelinks there. That's something you can double-check. Where I have seen it cause a little bit more confusion is if you have a paginated set and you link to all of those pages from the start.
So that's kind of a similar situation, in that you have a flat hierarchy rather than kind of this paginated set that kind of goes down. And that's something where I have seen us do weird things, like show page five of a paginated set for kind of a generic keyword, because we think, well, all of these are kind of equal, and page five seems to be the best one there. And as a webmaster, that seems kind of weird. Like, why does it start with page five rather than page one? So that's something where I think, purely from a ranking point of view, you probably won't see a big effect. But from understanding the connections between the pages, things like sitelinks, the pages that are shown in the search results, that might be something where you would see some kind of fallout. Just from telling Google all of these pages are equivalent. And then we say, oh, well, all of these pages are equivalent, so we'll just pick whichever ones we like. Whereas if you give us a clear structure, then it's like, oh, well, this is the most important, this is like the second most important; then we can focus more on that. So I was just asking about rankings because of kind of how internal linking works, especially since, for subcategories targeting very niche topics, it's much easier to rank well for those because there's a lot less competition, and it's harder to rank for the parent categories because there's more competition. So from an internal linking point of view, wouldn't it make more sense to only have those parent categories in the menu, and not have the subcategories as well, simply because there'd be more focus on the ranking signals going to them? Yeah, I think that makes sense. That's something where sometimes that also shows up, I think, in the forums or on Twitter; I've seen that where people say, Google is ranking the wrong page for my keywords. I have this higher-level category page, and Google is ranking this low-level product or kind of subcategory page.
And sometimes that also comes from us just not having a clear understanding of the hierarchy, that this is really the broader term, and this is the more niche term, and they're kind of above and below each other. So I'm guessing it shouldn't be very flat. It shouldn't be very, obviously, very deep either, because you don't want to go 10 pages deep to find something. Probably it would be best to be somewhere in between. Yeah, I think that's the part where everyone struggles; it's like, what does it mean, a balanced hierarchy? Like, does it have to be square? I don't have any explicit shape that it should be in, but it shouldn't be completely deep, it shouldn't be completely flat; it should be somewhere in between. So I'm guessing a good way to go about moving away from that very, very flat architecture, instead of something more drastic, I mean, cutting out all of the subcategories at once, would be to maybe move category by category? Like, just remove the subcategories from this one, and see how it goes. Yeah, OK. Yeah, I think, in general, that's always a good idea, because you're testing things step by step and you see what the effects are. And if at some point you're like, well, it makes absolutely no difference, then you might as well save yourself the effort of doing that for the whole thing. So kind of going step by step and seeing, is it actually doing what I expected it to do, that's generally what I'd recommend. Cool, thanks. Cool, OK. So let me pause the recording here. I'll stay on for a little bit longer if any of you want to hang out a little bit off the record, if you have any questions that you didn't want to ask live. But thank you all for joining in. And thanks for watching the recording, if you're watching the recording. And I wish you all a great weekend. And stay safe, wash your hands, of course. And see you next time. Bye, everyone.