All right. Welcome, everyone, to today's Google SEO office hours hangout. My name is John Mueller. I'm a Search Advocate at Google in Switzerland. And whoa, that's my telephone. Just a second. This is awkward. This is so funny. You timed that very well. I don't have those problems. Oh, man. Yeah. Like, nobody calls me, except now. Maybe that was me. Oh, man.

OK, well, part of what we do are these office hours hangouts, where people can jump in and ask their questions on Google Meet, not by telephone, about their website and Google Search. A bunch of things were already submitted ahead of time, so I can run through some of those. For some reason, it looks like people aren't jumping into the hangout at the moment. That might be, I don't know, because they're all busy, or it might be something on Google's side. But we'll get started anyway. Maybe more people will jump in later on. Do any of you want to get started with a first question? Come on, Barry.

What's up with the request indexing thing? What's with the survey, taking it away from us? Are you taking goodies away again?

I'm not planning on taking away anything. I think this is one of those features where the various teams at Google really love the data that they're collecting there, and love to get these things into the index as quickly as possible. But unfortunately, that sometimes also attracts attention from people who are using it to try to get spammy stuff indexed. So one of the things that we've been thinking about is: is there something that we could do to make sure that the functionality that people need, or the reason they use this tool, is covered automatically, so that people don't need to do anything manual? And that's something I noticed on Twitter, where there are lots of people who are coming with reasonable reasons to use this tool. And I feel like we should just be able to handle all of that automatically, so that people don't need to do things manually.
And there are no plans to disable the tool or take it away or anything like that. But at the same time, if we can handle more of these requests automatically, then it just saves everyone more time and is a little bit more efficient. So that's kind of the direction I was going there: to figure out what we need to be doing differently so that you don't need to use that manual tool, unless there's really an exceptional use case. And a lot of the things I saw that were submitted in the form were really useful, where it's things like, oh, if I don't do anything manual, then it takes two weeks to get a new page indexed. And from my point of view, that seems like something that shouldn't be taking so long on our side. So we should really take some of these examples and work to improve our systems, at least from my point of view. That was kind of the background there. And it's not related to any of the indexing issues that we had in the past. It's really just people working on this, trying to figure out what the right approach is here.

OK, I will just pop through some of the questions that were submitted. We'll see if more people can jump in as well. Let me double-check that the link is actually all posted. Yeah, I don't know. We'll see.

What effect does redirecting WordPress attachment pages to media files, as recommended in the Yoast plugin, have on the chance of ranking in Google Images and website SEO in general?

That's an interesting question. I don't know exactly what the Yoast plugin recommends or what it does specifically there. From my understanding, in general, you would have the images embedded in the individual blog posts or in the individual pages of your website. And you would also have the same image on these attachment pages. And usually, when it comes to Image Search, we would be indexing the landing page, your blog post, and the images that are linked there.
So the attachment pages are something that we probably don't need to pick up, essentially. So my guess is that redirecting from your WordPress attachment pages to media files probably doesn't change much, because we're probably not indexing or visibly showing those attachment pages anyway. So you're redirecting users more than you're redirecting search engines. But I don't know the details of what the plugin is doing there, or how WordPress, by default, handles these kinds of attachment pages.

Various country-specific sections of our website lose visibility periodically. Since the end of October, the amount of de-indexed pages has increased dramatically, by tens of thousands. The pages that take their place in the search results are historic. They've had, and still have, 301 redirects for many months. Search Console indicates that Google is choosing the old pages as canonical. Thousands of pages are now not found on any URL, not on the new or the old URL. The sites on the same domain sharing the language with affected sites start to rank instead. For example, in Google.de, people see many of our Austrian pages instead of the German pages. The pages are mapped correctly with hreflang tags and XML sitemaps.

It kind of goes on with some more details. I think I would specifically need to have some sample URLs to know for sure what is happening there. But taking a step back, when you have redirects from one page to another, from our point of view, what happens is we take those two URLs, the old URL and the new URL, and put them into a shared cluster that we then use for canonicalization. So we essentially say these two URLs lead to the same content. Which of these should we be showing? And we use redirects to try to figure out which of these we should be showing.
But we also use a lot of other factors, things like internal links within your website, external links, sitemap files, other annotations that you have on these pages. All of that comes together. And with all of those extra pieces of information, we then say, OK, for each of these pages that are in this cluster of pages that essentially lead to the same content, which of these is the best one to show? And sometimes that can be the redirect source. Sometimes that can be the redirect target, depending on essentially what the bigger picture says. So just that things shift from the redirect target to the redirect source at some point, even if those redirects have been in place for a while, that's not necessarily broken from our point of view. So that's not something that we would say is unusual, or anything that you need to fix. Because purely from a ranking point of view, nothing changes there. It's really just the URL that is shown in Search. There's nothing with regards to different ranking there. It's purely just the URL.

Of course, the part that's also associated with this is in Search Console. We show the data by canonical URL. So you will also see that shift in Search Console, and it makes tracking a little bit confusing sometimes. So I agree that can be annoying. One of the things that I would recommend doing in a case like this, especially if you're seeing this change happen on a large scale across your website, is to double-check all of the other factors that are involved with canonicalization. So in particular, I would look at things like internal links, sitemap files, and other annotations that you have on these pages with regards to hreflang, or any kind of cross-links that you have there, and make sure that they're all aligned with the URL that you now want to have indexed. And the more you can make all of those factors align, the more likely it is that that URL comes out on top after canonicalization.
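To make that alignment concrete, here is a minimal sketch of what the signals could look like after a URL change; the URLs and anchor text are hypothetical, and this is just an illustration of the factors described above, not a complete checklist:

```html
<!-- Hypothetical example: /old-page 301-redirects to /new-page.
     Every remaining signal should point at the URL you want shown. -->

<!-- On /new-page itself: a self-referential canonical -->
<link rel="canonical" href="https://example.com/new-page">

<!-- Internal links should reference the new URL directly,
     not the old URL that redirects to it -->
<a href="https://example.com/new-page">Read the guide</a>
```

The sitemap file should likewise list only `https://example.com/new-page`, so that the redirect, the canonical tag, the internal links, and the sitemap all agree.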
So from our side, again, it's not a sign that we're de-indexing pages or that anything is broken. It's just that we're picking the other URL to show instead of this one, and we're showing it in the same place. And the same can happen with international content, especially if the content is in the same language. Then we might say, well, this is primarily the same content. One of these pages is for Germany. One of them is for Austria. It's essentially the same full content there. And in a case like that, we will, perhaps, not always, we will perhaps choose one of these pages as the canonical, because we say they're essentially the same. They deserve to be in a kind of canonical cluster. And there, again, we'll take the different factors into account for picking the canonical URL.

And if you additionally have hreflang on these pages, we will pick one of these as the canonical URL, but we'll still swap out the URL that we show in the search results depending on the user's location. So that makes it a bit confusing, because if you look in Search Console, everything is focused on the canonical URL. And you don't necessarily see that we're actually showing the other URL in the search results for users from that location. So my guess, and again, I don't know, because I don't know which site this is, my guess is that when you're seeing this change from Germany to Austria, for example, for your sites, then probably that's what's happening. And if you explicitly search in Google Germany, or with the settings for content for Germany, I think that's with the parameter gl=de in the query on top, then you should see your pages showing in the search results. And if you hover over the link, you should see the appropriate localized version. So from our point of view, that's working as expected. It's not something we would consider a bug, but it is confusing.
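For reference, hreflang annotations for a Germany/Austria pair like the one described could look like this; the URLs are hypothetical, and each page variant needs to carry the full set of annotations, including a link to itself:

```html
<!-- Included on both https://example.com/de-de/ and https://example.com/de-at/ -->
<link rel="alternate" hreflang="de-DE" href="https://example.com/de-de/" />
<link rel="alternate" hreflang="de-AT" href="https://example.com/de-at/" />
<!-- Fallback for users in other locations -->
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```

Even with correct annotations like these, Google may still fold the two near-identical pages into one canonical cluster and swap the displayed URL by location, as described above.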
And we've been going back and forth with the team that works on hreflang for a while now to see if we can change that somehow. But I don't see that changing in the short term. Essentially, from our point of view, it's working as intended that we fold these pages together into a canonical cluster if they're essentially the same content; if they're not the same content, then we would track them separately again. And it's working as intended that we would use hreflang to pick the right URL to show. And it's also working as intended that in Search Console you see the reports based on the canonical. And all of these things add up, and they make it a bit confusing to track. Let's see.

With BERT coming out, will the importance of the exact keyword, the exact match keyword, decrease?

So I think BERT has been working in various ways for quite some time. BERT is essentially a machine learning setup, I believe, for understanding content a little bit better. So the queries that people type in, understanding those better, and understanding your page's content a bit better. And with all of these machine learning approaches, we try to figure out what these pages are actually about, what the query is actually looking for, and we try to match those a little bit better. And from my point of view, all of these changes that have been happening over the years, they do lead in the direction that you don't have to have the exact keywords on your pages anymore. And this is something that I think SEOs have seen, maybe subconsciously, over the years as well, where they realize, oh, you don't need to have singular and plural versions on your page. You don't need to have all of the common misspellings on your page. All of those things are less critical on your pages, as long as you really match what the user is actually looking for.
So with that in mind, you could say that it's going in the direction of decreasing the importance of exact matches for your keywords in your content. But it's not that that is the goal of these algorithms; rather, our goal is to understand the big, vast amount of content out there a little bit better, so that we can show the right versions to users when they ask.

Um, I have a question about the FAQ page schema. I read the guidelines, and they say the text sent through schema markup and the visible text should be the same. However, what I'm not sure about is the HTML. For example, would I be allowed to have affiliate links in the text that is visible on the page if I remove the A tags from the JSON data, so that the affiliate links don't show up in the search results? The text would be the same as per the guidelines, but the HTML would be slightly different.

That would be perfectly fine. So that's something where, from our point of view, the important part is really that the text on the page is the same. I also double-checked the guidelines for FAQ page markup in general, and I saw that you could even leave the links in the answers themselves. And I've seen some sites do that, where if you've folded out the question to get the answer, you have a link directly in the search results. And depending on your site, maybe that's something that you're actually keen on. Maybe that's something that you don't care about. But whether or not you link the answer in exactly the same way as you have it on the page itself, I think that's ultimately up to you. The important part, from our side, is really that the text is the same. And in particular, the goal behind this is that we want to make sure that when a user clicks on a search result, they're not surprised by the content there, that they really know what is coming, what to expect.
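To illustrate the setup being asked about, here is a minimal, hypothetical sketch: the visible answer on the page contains an affiliate link, while the JSON-LD carries the same wording with the `<a>` tag stripped out. The question, product, and URLs are invented for the example:

```html
<!-- Visible on the page -->
<h3>Which blender do you recommend?</h3>
<p>We recommend the <a href="https://example.com/go/blender">XY-200 blender</a>
   for most home cooks.</p>

<!-- Structured data: same text, link markup removed -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Which blender do you recommend?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "We recommend the XY-200 blender for most home cooks."
    }
  }]
}
</script>
```

As noted in the answer, keeping the link inside the `Answer` text is also allowed; it would then appear in the expanded result in Search.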
If you were starting a new website today, what technology would you use that's best for crawling, indexing, and ranking? What technology? Probably HTML? I don't know. I mean, what do you make websites out of? And where do you start counting where the technology comes in? I think that's really hard to say. I don't know. I think if I were starting out with a new website and wanted to create something new, I would try some new technologies out and see how they work. It also depends on what kind of website it would be. If it's something that needs to be interactive, then obviously you'd want to look into some of the more modern formats, maybe even a JavaScript-based framework. If you need something to just display content, if you wanted to write the code yourself, maybe using AMP would be an option there to make really fast pages. If you just want to get on with writing content, you don't have time to actually figure out all of the technical details, probably using a CMS, something like WordPress is a good approach. And some of these systems you can combine as well. I know there are plugins for WordPress that will make AMP pages out of your site, so you could have both of these at the same time. I don't know so much how you would combine WordPress and JavaScript sites, but I'm sure there are plugins for that too. But I think the essence of the answer is there are many different ways to make websites nowadays. And luckily, most of these technologies, they just work well in search by default. So finding a technology that works well for you, that works well for the people that you're working together with, who can help you to get things set up, who can help you solve the hurdles that come along with any kind of new website or business that you roll out, I think that's kind of what you should be watching out for. 
When setting up a Data Studio report using the recommended Google template, there's a huge discrepancy between the metrics reflected in Search Console and the Data Studio report. Impressions, clicks, et cetera. Why is that, and what is the solution?

I have absolutely no idea, sorry. So this is something where, I guess, internally I'd defer to Daniel, who's worked on a lot of the analytics and Data Studio side, and who might know a little bit more about this. But another approach would be just to check in the Webmaster Help Forum and see what folks there are saying. Because Data Studio reporting for Search Console data is something that a lot of people do. So I'm pretty sure there are people who can help you to get that figured out, too.

I inherited a website client that has a bit of a jumbled URL naming structure. About half the pages are listed as domainname.com slash top-level service slash specific page. And the other half are just labeled with slash specific page. These pages are mixed, so that one top-level page has a mix of both naming structures. Does this confuse Googlebot when it's crawling the site? Should the naming be consistent throughout? If I need to change the naming, I would set up 301 redirects. I'm always reluctant to rename URLs. But of course, if I thought my client could get more traffic, then maybe I'd do that.

So from our point of view, both of these structures work. It's not that you need to have a consistent structure within your website, but essentially we need to have consistent, stable URLs for your content. And if we can find those stable URLs for your content, if they're not changing all the time, if you don't have the same pieces of content on multiple URLs, then essentially we can crawl and index that normally. So from a practical point of view, just for SEO reasons, you don't need to make any changes here. My suggestion, however, would be to try to get this cleaned up a little bit, primarily so that you can track things better.
So in Search Console, you can look at things on a sub-directory level. In Analytics, you can also drill down into individual sub-directories. And if you have a clearer structure on your website, then it's a little bit easier to figure out, OK, is this a product page, or a category page, or a blog post, or a news article, or what is it specifically? And that's the direction I would head there. It's not so much that you'll have an advantage for SEO, but you might have an advantage in the long run, because you're able to better understand how your website is doing in Search and how people are interacting with it.

I have two news sites. I post the same content on each one, but only one of the two appears in Top Stories. Why?

I don't know why. But at the same time, if you're posting the same content on both of these sites, why should both of them appear in Top Stories? It seems like we should pick one of these and show it there, and not both of them. But I don't know what the specifics are for Top Stories ranking, and why and when we would show individual ones there. Sometimes posting the same content to two individual sites doesn't mean that you're doing exactly the same thing; rather, maybe one site is really seen as being critical and important, and the other is something completely new that you just started. So just because you're posting the same piece of content to both of these sites individually doesn't mean that that piece of content will be equivalent in both of those locations. So that's another thing to keep in mind: when it comes to Search, we don't just look at one individual piece of content or the text on that page. We look at a lot of different factors.

I have a website with recipes and food FAQ.
I'm making my text short, because my users like short reading. How do I rank better with short and precise text, instead of long text recipes, where the ingredients and the how-to-make list are almost impossible to find in all of the text?

I think that's a fantastic idea. Sometimes it does get pretty tedious when it comes to recipe sites, where trying to find the detailed information can be pretty hard. So from my point of view, I would definitely go in that direction. Essentially, when it comes to ranking, we don't count the number of words on a page. So it's not the case that you need to fill your page with a long essay of thousands of words just so that we understand, oh, this is an apple pie recipe. Essentially, we look at a variety of different factors for understanding the content on a page. You can use things like structured data to help us understand that automatically as well. And with that, you can also make really short pages that rank really well in the search results. So that's something where there's no trick to ranking. And certainly, it's not the case that you need to fill your pages with tons of text that nobody's really interested in. I'm not quite sure how this initially started with recipe sites having a lot of text there. I suspect at some point it became kind of self-sustaining, in that people see these sites sometimes ranking well in Search and say, oh, well, if this other site is doing this big background story on the history of apple pie, then I also need to add a big background story for my apple pie recipe. And probably that's not the case.

What are the best Google Analytics metrics to track for deciding, using A/B testing, what to place at the top of a website's page? I have photography websites in mind, so comparing things like slideshows, grids of images, single images with text, et cetera.

I really don't know how you would best compare this with Google Analytics.
I would ask someone from the Google Analytics team, or check in with their help forum, to try to find out a little bit more.

Kind of awkward that there are not so many people joining in. I hope there's nothing blocking on the Meet side. But I guess we'll find out when I check Twitter afterwards.

When you add the link to the YouTube channel, you don't really see it, even when you refresh; you have to sort by most recent.

Oh, OK. So I will just drop it on top of the description. We'll see what happens. OK. Hope that doesn't mess things up.

Does Googlebot support AVIF images, which are significantly better than WebP? I know that it's supposed to be an evergreen bot, which means it should support them theoretically, but just asking. Also, when will you change your Google Plus link on your Twitter profile?

Oh, man. My Google Plus link. Nobody's going to take that away. Who knows? At some point, if we wait long enough, maybe it'll come back. And anyway, what would I change it to, my Twitter profile? I think that would be awkward.

Anyway, back to the more important question of images. I double-checked our public documentation for Image Search, and we don't have AVIF images listed there at the moment. So I don't know if we would support that. My feeling is we wouldn't support that at the moment for Image Search. With regards to Googlebot and the evergreen side of Googlebot, that's slightly different. So in particular, when it comes to rendering a page, we would essentially have Googlebot and the evergreen setup, which probably would support AVIF. However, that's just with regards to rendering the page as we would use it for Web Search, so for the textual part of Search. So essentially, we'd be able to load the images, but we wouldn't use the images, because it's for Web Search, which is not very helpful in this particular case.
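For newer formats like AVIF, the usual approach is a fallback setup that keeps a universally supported image available. A minimal sketch with hypothetical file paths, where the browser (or a crawler) picks the first listed format it supports and otherwise falls back to the JPEG:

```html
<picture>
  <!-- Served to clients that support AVIF -->
  <source srcset="/images/apple-pie.avif" type="image/avif">
  <!-- Fallback for clients that support WebP but not AVIF -->
  <source srcset="/images/apple-pie.webp" type="image/webp">
  <!-- Universal fallback; also what could be indexed if the
       newer formats aren't supported for Image Search -->
  <img src="/images/apple-pie.jpg" alt="Apple pie" width="800" height="600">
</picture>
```

The same `picture` element also handles responsive images via `srcset` descriptors, so the two uses can be combined.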
My general feeling with a lot of these newer image formats is that as usage picks up on the web, and as more people ask about these things for images, the more we would try to find ways to implement that for Image Search as well. So my guess is that, over time, we will probably support these, especially as browser support grows for these different formats. The thing I would watch out for with all of these modern formats is that not all browsers support them yet. And with that, you always have to have that backup setup anyway, where you have, I think, the picture element in your HTML with the different image URLs listed there, which you would use for responsive images, but which you can also use for different image formats. So essentially, you'd always have that backup there anyway. So if Google doesn't index your AVIF version, then we'd index the JPEG or the WebP version, or whatever you have on your site. So it's probably not critical for Image Search at the moment. I'm guessing at some point we'll make sure to have that supported, too.

When Google announced passage-based indexing, the example it provided in the announcement showed a featured snippet result. Will both featured snippets and the core results in the 10 blue links benefit from passage-based indexing? Or will it just result in better answers for featured snippets? We haven't heard much on that front.

I honestly don't know. So, kind of taking a step back and just guessing at this with my internal information: usually what happens with these things is we will roll them out in one particular place, experiment a bit to find out how to best implement them, how they work best, and then find ways to roll that out a little bit more broadly. So it might be that we start showing these in featured snippets first because, I don't know, we showed that example, or maybe that's the clearest way we can check this.
And then at some point, we start showing them more in the normal search results as well. The other thing to keep in mind is that the name is a little bit confusing. We called it, I think, passage-based indexing, but it's really more about ranking the existing pages. It's not that we would index something different. It's just that we would understand these pages, these passages, a little bit better, be able to pull those out and show them a little bit better in the search results, and generally be able to rank those pages a little bit better. So there are a few things coming together there. But again, like with all of these newer changes in Search, usually we try them on a small scale and then roll them out a little bit larger over time.

Is it mandatory to have all subfolders working? For example, example.com slash play slash movie exists and returns a 200, but example.com slash play doesn't exist and returns a 404.

Perfectly fine. We use URLs primarily to identify the individual pieces of content. It's not the case that we would try to make up URLs, see what happens there, and say, oh, if these folders don't exist, then those pages should be demoted in Search. We essentially see those URLs as identifiers for those individual pages. And if we find a link pointing there, we will crawl it and try to index it. If we find a link to the higher level, the directory, and it returns a 404, then that URL is just not indexed. It's not that everything below that URL is not indexed. It's just that particular URL is not indexed. So it's perfectly fine not to have all subdirectories return content by default. One of the things I would watch out for here, though, is if you have a plugin that creates the breadcrumb markup for your pages: make sure it doesn't just blindly link to the subdirectory-level parts of your website without those pages actually existing.
Because if you want breadcrumb markup to appear in your search results, we need to know that those breadcrumbs actually lead somewhere. So that's the one thing to watch out for. My feeling is most plugins that implement breadcrumb markup would just focus on the categories that you have defined in your CMS anyway. So probably that would just work.

A category or product page is usually divided into components like a product title, description, offers, reviews, similar products, et cetera. Should we make the headings of all components H2? Or can we use H2 and H3 freely, according to the importance of the components and keywords? Can this make it hard for Google to understand your page? Assuming there's an H3, must there be a parent H2?

So Google is not picky and doesn't expect a completely clean heading structure within your website. So sometimes you'll have an H3 and not an H2. Sometimes you'll have two H1s on a page. All of these things just happen on the web, and we have to deal with that. And it's not the case that there is any advantage to having clearly structured headings on your site. It's not the case that there's a disadvantage if you don't have that. I think it makes sense, for accessibility reasons, to have a clearly structured set of headings on your page. But for SEO, it doesn't really matter. So with that in mind, if you have individual parts of your site separated, and you have clear headings for those individual parts, then that makes perfect sense. The thing I would watch out for here, the thing that drew my attention to this question, is your focus, or slight focus, on keywords. Headings are not a place where you just blindly drop keywords. They're really a place where you tell us more about the structure of your pages. So sometimes that includes keywords, if those are the higher-level elements that you're talking about on a page. But you shouldn't just say, oh, this is my list of keywords.
I should map them all to individual headings so that Google thinks my page is important for these keywords. So that's the one thing I would watch out for.

When a site is not updated regularly, Google takes longer to discover new URLs. Two or three days can be very relevant for being shown and discovered. Submitting a new URL in Search Console is useful for many small sites.

I definitely agree. That said, like I mentioned at the beginning, I think we should be able to pick these up a little bit better on our side, too. So especially if you're a fairly small site and you're using a CMS, then ideally your site would be creating and submitting a sitemap file, which in turn should be able to tell our systems that, oh, we should check out this new or updated page on your site, so that you don't necessarily need to go out and submit things manually.

You said duplicate content first gets indexed, and then the canonical is processed. In the recent Search Off the Record, Gary was saying that while rendering, the system processes the canonical tag for duplicate content. Can you please confirm what the real process is? Will duplicate content get indexed first or not?

I think there are lots of simplifications that we make when we talk about Search. And some of these are reflected in aspects like which one of these steps comes first. And essentially, you can assume that both of these are correct, in different situations. So from that point of view, I don't think it's really something that you can rely on either way, where you say, oh, well, my page will always be indexed like this first, and then like that. Sometimes we can follow a canonical link directly, just focus on the destination page, and have that indexed immediately. Sometimes we can't focus on the destination page that quickly, so we first process the origin page. So it depends on the individual circumstances of a site.
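The CMS-generated sitemap file mentioned above for small sites is just a small XML file that gets regenerated whenever content changes; a minimal sketch with hypothetical URLs and dates:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/blog/new-post/</loc>
    <!-- lastmod signals that the page is new or was recently updated -->
    <lastmod>2020-11-27</lastmod>
  </url>
</urlset>
```

The sitemap's location is typically referenced from robots.txt with a `Sitemap: https://example.com/sitemap.xml` line, or submitted once in Search Console, so that new URLs get picked up without manual submission.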
Is there any reason that images in image packs, featured snippets, and knowledge panels register an impression in Search Console for Web Search? Those images first link to Google Images and not directly to publisher sites. Per Google's own documentation, only links leading outside of the search results should register as an impression. These images can really throw off the reporting for many sites and cause a lot of confusion with site owners.

Yeah, I don't know. Occasionally, this question comes up. But I mean, from our point of view, we would essentially see these links as links to a site, kind of like how, in Image Search, when you click on an image to expand it, we would also see that there. So that's something where, I think because of the intermediate step, where the image is first shown in a preview and expanded there, that sometimes throws things off a little bit with regards to how you would count it. Because in Search Console, we don't have this notion of a two-step click, where you clicked on it, but you're looking at a larger preview first, and then you're clicking again to go to the site itself. So maybe that's something that we need to add to Search Console. I don't know. I guess the other part of the question in general is, should we be counting these at all in Web Search? Or should we essentially be dropping all image links that are shown in Web Search? And I don't know. That feels like a pretty drastic step, if we were to go down that route. But maybe that's something to discuss with the Search Console team again.

I see many discussions about people buying expired domains to take advantage of the links associated with the expired domains. What they do is build a site on an expired domain, or they redirect an expired domain to a second domain that they want to rank. Does Google reset the backlinks of expired domains so that they don't have an effect when someone buys them and builds a site on the expired domain?

Now, I don't know.
These discussions have been around since the beginning, when domain names started expiring and people tried to keep using some that had a good history and to build things there. I think initially it was probably just to keep the name alive, and then at some point for SEO reasons as well, of course. And our systems try to understand what to do here. And for the most part, I think we get this right. So it's not that there's any one specific factor that we would look at and say, oh, they're trying to do something sneaky with those expired domains, we need to be super cautious. Sometimes people revive expired domains and really run the old website again. Sometimes people also pass on a website from one owner to another, so the domain name changes ownership. Those are all normal situations, and it's not the case that Google should go in there and say, oh, we need to be extra cautious here. So finding those situations where essentially people are trying to abuse the system by picking up expired domains that are totally unrelated to what they've been working on, hoping to get them to rank well in Search, that's sometimes a bit tricky. And we have a lot of practice with that. I don't think we always get it right, but at least in the many cases I've looked into, it seems to be working out OK. But anyway, this has been around, and I think it will continue to be around, as long as people can change ownership and reuse existing domain names. It's kind of a part of the web.

I think you're done with the questions. Can I ask one, or do you have more questions?

Go for it.

All right, so let's go back to passage indexing, since it didn't launch and nobody knows what's really going to happen. So can we speculate a little bit more about it? I think last week or the week before, you said that passage indexing is not really about a webmaster, SEO, or whatever you want to call these people today.
It's not about them changing their content, because generally you want to make sure your content is structured in a good way. You want to make sure the page is structured in a nice way. It's more about Google being able to understand content on a page that's not structured well. So it sounds like it's not really something to optimize for, because Google is getting better, with passage indexing or ranking, at understanding a page that's not structured so well. And Google will be able to say, all right, this section on this page, even though it doesn't have a headline or a certain content block that tells us this is about x, y, and z, now we can understand that this specific section is about this topic.

So is that a better way to look at it, as opposed to SEOs going about saying, hey, now I need to go ahead and restructure my pages? A lot of people have these hub pages where they have a topic about, let's say, SEO. And then SEO covers AMP and mobile-first and yada, yada. So that hub page might have a little subsection about mobile-first indexing and AMP and so forth. So you might be able to rank that top-level page with passage indexing for a search on AMP, as opposed to the deeper-level page. So you can see how SEOs are kind of thinking about this. I guess it's a two-part question: one, are you going to rank that top-level page as opposed to the deeper-level page? I assume that probably would not be the intent of this. And two, should SEOs really do nothing, based on what we just discussed?

Oh, I mean, SEOs will never do nothing. They're always optimizing, right? And I'm regularly surprised by what people figure out and where they find ways to do something legitimate that essentially is not what we expected. So I'm not worried that SEOs will sit back and just start doing nothing at some point. I think they're super creative, and they're super smart people.
But with regards to focusing on a higher-level page versus focusing on the individual, more detailed pages, I think that's something that you always need to look into and consider: does it make sense or not? It's similar in e-commerce, when you have a category page and you have a product page. You could optimize that category page to be a little bit more focused on the individual products that you're trying to sell. That could result in the category page ranking a little bit better for those products as well. Is that what you want to achieve, or do you need people to go to your product page? That's something where, depending on your goals and what you want to achieve with your page, you can tweak things in either direction. In general, if you have fewer pages and you make those pages stronger, I think that's generally a good strategy. It doesn't mean it's always a good strategy. It might be that you really need to spin off some things into separate pages and talk about them in detail, because that's what people expect. But if you can make fewer pages that are stronger, then there's less technical infrastructure needed to maintain them. And if they get the same amount of traffic, and you get conversions in whatever way you want there, then that's essentially up to you.

Thank you.

And one way to look at this, or to play with this a little bit, is to just do A/B testing: take some categories where you get enough traffic to measure differences, build out those category pages or those higher-level pages a little bit more, and see if that changes anything for you. If people end up going to those higher-level pages more, are they doing whatever you want them to do on your website, or have they changed their behavior a little bit?
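A minimal sketch of how such an A/B comparison could be evaluated once the data is in, assuming you've already split comparable categories into a control group (unchanged pages) and a test group (built-out pages) and counted visits and conversions for each — all names and numbers here are hypothetical:

```python
import math

def conversion_rate(conversions, visits):
    """Fraction of visits that resulted in a conversion."""
    return conversions / visits

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: control = unchanged category pages,
# test = built-out higher-level category pages
control = {"visits": 5000, "conversions": 150}
test = {"visits": 5000, "conversions": 190}

z, p = z_test_two_proportions(
    test["conversions"], test["visits"],
    control["conversions"], control["visits"],
)
print(f"control rate: {conversion_rate(control['conversions'], control['visits']):.3f}")
print(f"test rate:    {conversion_rate(test['conversions'], test['visits']):.3f}")
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value suggests the behavior change is unlikely to be noise, which matters here because, as noted above, you need "enough traffic so that you can measure differences" before drawing any conclusion.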
And testing these kinds of things, I think, is something that you almost need to do, because there's no definitive answer that says one really long page is going to be better than a bunch of short pages.

Yeah. Right. I mean, for users, obviously, it depends on the user. But if somebody is looking for a specific topic, again, back to, let's say, AMP around the SEO topic and how important AMP is for SEO, they might not want to try to find where that section starts on the page and scroll through it. Of course, I assume Google could be looking into that anchor feature in modern browsers to jump down to the right part of the page and highlight it. But I don't know, it just doesn't seem like what a normal user would expect. I guess it depends. You have to test it, of course. So it used to be that SEOs would build out as many deep-level pages as possible. Then Panda came out, and that consolidated your content: make higher-level quality pages, fewer lower-quality pages. And quality is subjective, of course; you can have very short content that answers the question, that users love. So it seems like the focus, again, still goes to making these higher-level pages that help answer the question. Like you said, fewer pages, building up those individual pages, as opposed to deeper-level pages with fewer signals pointing to them. But that seems like an SEO answer as opposed to necessarily a user answer.

Yeah, I mean, you kind of have to balance that. And it's probably also something where marketing in general plays a role. If you think back, I don't know how many years, 10 years probably, to when the fad of the really long marketing landing pages came up, that's something where lots of sites did these kinds of A/B tests, where they did short marketing landing pages and really long marketing landing pages. And sometimes the long ones performed better, sometimes the shorter ones performed better. And I think that's something that still applies to SEO.
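The browser anchor feature mentioned above is presumably the text fragment syntax supported in Chromium-based browsers, which lets a link scroll to and highlight a specific passage on the destination page; the URL below is a hypothetical example:

```text
https://www.example.com/seo-guide#:~:text=how%20important%20is%20AMP
```

Browsers that don't support it simply ignore everything after the `#:~:` delimiter, so such links degrade gracefully to a normal page load.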
And user demands change over time as well. If you search for AMP now, you kind of have the expectation that people already understand what AMP is and how it fits into SEO. But if you had searched for AMP way in the beginning, then probably you would first need to provide a little bit more context and give a little bit more of an overview before you dive into the details of AMP and SEO.

OK. Wow. I think for some reason people aren't joining today, but whatever, it happens. Let me see if any new questions showed up. Otherwise, anything else from any of you is definitely welcome.

Hi, John. I have a question.

Hi.

Hi. Yeah, my question is in regards to exact-match URLs. I work with a very large number of small businesses in the UK, and so this is a question that we get very often. So I'd like to hear your point of view, and I guess whether this is sustainable, whether this is within best practices. We have a client who, without telling us, purchased a large number of URLs, which are location plus product. For example, flowers Burgess Hill, or flowers plus the town, which is pretty much where he's located. So is this a good strategy? Is this against best practices? Because I'm a little bit confused in regards to this client, who suddenly this morning sent an email, really happy, saying, I just purchased 15 new URLs. And I'm like, right. So yeah, I'd like to hear your point of view in regards to that. Thank you.

So usually there are a few things that come into play with these kinds of exact-match domains. On the one hand, we may see them as doorway pages. If you have a lot of these domains leading to the same content, then it's possible that we will just say, oh, we will just index your primary content and drop those doorway pages. So it's not that there would be a disadvantage to having those. It's more that you don't have any big advantage from using those.
That's the primary aspect that comes into play there. The other thing is that sometimes these kinds of domains can make sense for non-SEO reasons. For example, if you're doing local marketing campaigns, you might want a kind of easy-to-remember domain name that you can point people at. You can make a sign with it, all of these things. And people can go there, and they get redirected to your main site. And that's essentially fine. So I think, yeah, those are the primary aspects there. With regards to doorway sites or doorway pages, usually it's something where the number of pages or the number of doorway sites that you're creating is really significant. If you're talking about 10 or 15 sites, then I don't think, even if the Web Spam team looked at that manually, they would say this is a big problem. If you're talking about 100 domains, then that's something where the Web Spam team might look into it a little bit closer and say, what exactly is happening here? I've seen cases where there were, I don't know, 10,000 domain names involved in this kind of a setup. And then that's really something where you're spending a significant amount of money on all of these domains to keep them running, and if the Web Spam team says, oh, we will just index one of these, then that's a really big problem. But if you're talking about 10 or 15 domain names, then it's probably not an ideal investment, but it's still not like you're completely throwing money out the window. Sometimes it can make sense even just for non-SEO reasons.

Brilliant. Thank you.

Sure. All right. Let me take a break here. I hope you all found this useful. And maybe we'll have more watchers on the recording afterwards. Maybe we'll have more people join in the next hangout, on Friday, I think. The next one is lined up. But thank you all for joining in, thank you all for submitting questions and for asking questions, and I wish you all a great day until next time. Bye, everyone.
Bye, John. Bye-bye. Bye.