All right. Welcome, everyone, to today's Google Search Central SEO Office Hours Hangout. My name is John Mueller. I'm a search advocate at Google here in Switzerland. And part of what we do are these Office Hour Hangouts, where people can join in and ask their questions around search. And we can try to find some answers. A bunch of things were already submitted on YouTube, so we can go through some of those. But if any of you want to get started with the first question, you're welcome to jump on in. Hi, John. I hope it's OK if I start. I have a quick question regarding Core Web Vitals. Does it make a difference if a Core Web Vital is below the lower threshold, or is between the lower and the upper threshold? What I mean is if the Core Web Vital is yellow or green, does it make a difference for the ranking beginning in May? I don't know if we've announced anything specific around that. My understanding is we see if it's in the green, and then that kind of counts as it's OK or not. So if it's in yellow, that wouldn't be kind of in the green. But I don't know what the final approach there will be, because there are a number of factors that come together. And I think the general idea is if we can recognize that a page matches all of these criteria, then we would like to use that appropriately in search for ranking. I don't know what the approach would be where it's like, there's some things that are OK, and some things that are not perfectly OK, how that would balance out. OK, will there be some kind of information before May about this? I suspect so, yeah. I mean, the general guideline is that we would like to use these criteria to also be able to show a badge in the search results, which I think there have been some experiments happening around that. And for that, we really need to know that all of the factors are compliant. So if it's not on HTTPS, then essentially, even if the rest is OK, then that wouldn't be enough. OK, thanks. Sure. Good morning, John. 
I had a question in regards to knowledge panels. I'm working on a client site, and I wanted to see what steps I can take to get a knowledge panel generated. Is there a certain schema, certain things I have to do to make that happen? We don't have any guidelines for how to enable a knowledge panel. Essentially, that's something that our algorithms try to pick up algorithmically, automatically. And that's something where we take into account a number of different sources of information to try to figure out what are the entities that are associated with this page and how relevant are they, and how should we be showing that in search? So it's not that there's a specific meta tag that you need to do, or a specific type of structured data that you need to add. It's more that everything kind of needs to align so that we can really recognize this page or this site is about a specific kind of entity. OK. No fantastic answer there, or no straightforward answer there. I know there are some people outside of Google who have been working all around the knowledge panel side of things. Andy just linked to Jason Barnard. He's definitely totally on top of this, and he can probably give you a lot of tips on things that you can kind of work on to align. OK. Thank you. I think someone raised his hand. Gregor? Yeah. Hi, John. I have two questions if it's not a problem. And the first one regards duplicate content. Say we have a page that describes a bunch of cars, right? So a certain part of the description and content is the same. It is unique. We wrote it. And it's information that users will find helpful. But how will Google look at that? Will it be some sort of negative score or something? OK. So with that kind of duplicate content, it's not so much that there is a negative score associated with it. 
It's more that if we find exactly the same information on multiple pages on the web, and someone searches specifically for that piece of information, then we'll try to find the best matching page. So if you have the same content on multiple of your pages, then we won't show all of these pages. We'll try to pick one of them and show that. So it's not that there's any kind of negative signal associated with that. In a lot of cases, that's kind of normal, that you have some amount of shared content across some of the pages. So really, the case, for example, is with e-commerce, if you have a product and someone else is selling the same product, or within a website, maybe you have a footer that you share across all of your pages, and sometimes that's a pretty big footer. And technically, that's duplicate content, but we can kind of deal with that. So that shouldn't be a problem. Yeah, great. So the second question would be about languages. So we have, say, English and Spanish, and we have the same article, but it's translated. And we have great rankings in, say, Spanish speaking area. And will that good ranking work for us so we get a better ranking in the US or English speaking area? Not automatically. We treat these as different pages, and we will try to rank them individually. However, what usually is done when you have a localized copy of your content is that you link between the localized versions. So you would link from the English version to the Spanish version, from the Spanish version to the English version. And based on these links, we would be able to distribute some of the signals associated with that good page with that new language version that you also have. So it's not like there's an automatic kind of your page in English will rank just as well as your page in Spanish, but some of that effort that you put in, if you link between those versions, that will be associated there. 
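To illustrate the cross-linking John describes between localized versions: in practice this is usually done with reciprocal hreflang annotations on each language version. A minimal sketch, with hypothetical URLs; Python is used here only to generate the markup:

```python
# Sketch: generate reciprocal hreflang <link> tags for localized
# versions of the same article. URLs below are hypothetical examples.
LOCALIZED = {
    "en": "https://example.com/en/article",
    "es": "https://example.com/es/articulo",
}

def hreflang_tags(versions):
    """Each language version lists ALL versions, itself included,
    so the annotations are reciprocal (hreflang requires return links)."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in sorted(versions.items())
    )

print(hreflang_tags(LOCALIZED))
```

The same set of tags would go into the `<head>` of both the English and the Spanish page, alongside the regular HTML links between the versions that John mentions.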
It's also the case that sometimes the competition in different languages is just very different. So you might have a very strong page in Spanish, and the English version is a much more competitive market. Then even if we forward some of the signals to your English version, it might be very different with regards to the competition in the search results. OK, thank you very much. Hi, John. I have two questions about video. So when we add an image to a website, we usually use an alt tag. We use a title tag and caption so that Google can understand what this image is about. What can we do for video? If I upload or embed a video in my content, what can I do? I don't know the names offhand, but for video, we do have a type of structured data that you can use. And it also has fields for things like descriptions and title, I think, for videos as well. So that's something you can definitely use. But the more practical thing is kind of like with images as well, if you have a caption right next to the video, if you have a heading on that part of the page, all of that kind of applies to the embedded content as well. So that's something that doesn't always require a special markup or special attributes to use. Second question is, is there any difference between uploading a video or embedding a video from an SEO point of view? Like, if I embed a video, it means I am pulling the video from another website. But when I upload it, it means it is on my website. It means it is my content. So does Google differentiate in this way, or is it always the same from an SEO point of view? It's essentially the same. Like, it's very common that you have a separate CDN for videos, for example. And technically, that's kind of a separate website. And from our point of view, if that works for your users, if your content is accessible properly for Google for indexing, then that's perfectly fine. Thank you. Cool. A bunch of people also have their hands raised. 
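The structured data John mentions for embedded videos is schema.org's VideoObject type, which includes fields like `name` and `description`. A hedged sketch that generates the JSON-LD snippet, with all values as placeholders:

```python
import json

# Sketch: build VideoObject structured data for an embedded video.
# All values are placeholders; schema.org/VideoObject defines the fields.
def video_object_jsonld(name, description, thumbnail_url, content_url):
    data = {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": name,
        "description": description,
        "thumbnailUrl": thumbnail_url,
        # contentUrl can point at your own host or a CDN; per the
        # discussion above, both work the same from an SEO point of view.
        "contentUrl": content_url,
    }
    return '<script type="application/ld+json">%s</script>' % json.dumps(data)

print(video_object_jsonld(
    "How to change a tire",
    "Step-by-step walkthrough.",
    "https://example.com/thumb.jpg",
    "https://cdn.example.com/tire.mp4",
))
```

The resulting `<script>` block goes on the page that embeds the video, next to the visible caption or heading John recommends.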
John, I just wanted to follow up a bit on that duplicate content question. Obviously, what you said is perfectly accurate. But I think there are some cases where Google doesn't automatically merge two pages together for various reasons, doesn't have enough signals, or signals are conflicting. So in that case, both pages might actually compete against each other. And in that case, while there's no specific penalty or any algorithmic factor to pull the site down, you're still at a disadvantage because you're trying to compete for the same thing with two pages. So I guess it's still worth trying to make sure to set up signals so they do actually canonicalize properly, without hoping that Google picks that up automatically. Yeah, definitely, yeah. John, can I just clarify that video question? Sure. I thought, unless my knowledge is a couple of years out of date, that if you used YouTube as your host for the videos, and you did a Google video search, then YouTube would be the source and not your page. Whereas if you use Vimeo or something else, then actually your page shows as the source. Is that not still the case? It depends. Oh, excellent. It depends. The standard answer, sorry. But I mean, with YouTube, you essentially have two video landing pages. You have the landing page on YouTube, and you have the landing page on your site. And we kind of have to figure out which one of these pages to show. And it can happen that we show your site as the video result landing page, just because we have more information there, perhaps. It can also be that we show the YouTube landing page because maybe we have more signals or more information there. So that's something where it's not automatically the case that we would show the YouTube landing page. And some other video platforms also have their own landing pages that they create automatically. Some video hosting platforms don't do that at all. Essentially, that's kind of up to you there. 
So essentially, if you do it on something like Vimeo, which doesn't have a public-facing landing page, the difference is you're not creating a competing video when you upload it. Whereas with YouTube, if it's public, you're creating your own and the competing YouTube version. And therefore, you're competing with YouTube. Right. Yeah. I guess that's one thing that could play into that. It's also, I mean, people sometimes search for videos on YouTube. So if you want to do that, that might be kind of something you'd want to do anyway. But yeah, it is something where you kind of need to think about that a little bit as well. And it's not 100% the same if you have the same video file that you're hosting yourself or that someone else is hosting for you without a landing page, or you essentially have two video landing pages. Sometimes you also want to have two video landing pages because it's slightly different content or it attracts a slightly different audience. OK. Hi, John. Can I ask something? Sure. OK. So for example, there's a website that is getting a good amount of search traffic. Now, I just wanted to know if the same website is getting traffic from some other channels like direct traffic, social traffic, or some traffic from other blogs. So is that non-search related traffic that a website is getting some kind of positive signal for Google, like people find that this website is getting a lot of traffic from direct or social channels, so we should push this website in search also so that searchers can find it easily? We don't use that for SEO. So I think it's great from a website point of view to diversify the different sources of traffic so that you have different places where people can find your site, different places how people can get to your site. But we don't use that for search. 
So if you're active, I don't know, on Facebook or on other social media channels and you get a lot of traffic there, then that's kind of a way to balance out some of the uncertainty maybe around search or around Discover or around some of the other channels that you also use. Sorry. So I think that diversification makes sense. It's something to work on and it helps to make it a little bit safer with regards to your presence online, but it's not something that we take into account for search. Not directly or indirectly? Like, for example, a brand suddenly starts getting a lot of direct traffic. So there's no impact on search performance? No. I mean, indirectly, you might see something if people go to your site and they think it's fantastic and they share it with other people and we pick that up as a link. Then maybe we could take that into account, but not directly. Sure. Sure. Thank you. OK, I see a bunch of people raise their hands and I just found a place where I can see the names. So I'll just go through from the top of my list here, Benjamin. Hi, John. How are you? Really long time listener, first time caller. It's an honor to be here. I remember the Matt Cutts days, but thank you for letting me in. I'm writing in, or calling in, because I purchased a website that has been around since 1998. It was owned by the same person till 2018. Can I paste it in the chat? Sure. OK. It got purchased and then from what I can tell from all my SEO research, it got possibly turned into a PBN potentially or just a really bad website. I purchased it and I've done everything I can to turn it around. I got in touch with the original owner, stalked the internet, found him, been emailing him back and forth, asked him why he sold it. He just said he got too old for it. Updating all the content, added new UI, added social profiles, which didn't exist. I'm seeing slight movements in traffic, which leads me to my question. I'm investing a lot of time into this. 
Money too, but it's just really my time and I'm putting my heart and soul into it because I do think that this site needs to exist. I haven't found much on this topic. Am I wasting my time? Does Google say, oh, this was maybe potentially a PBN, or they did spammy backlinks? Oh, I did disavow as well. Am I wasting my time? Can you talk to the point of, are sites able to be turned around, especially under new ownership, new WHOIS information, new domain, new host as well? Yeah. I think the short answer is yes, you can turn it around. In general, the most common issue that we find is that maybe there is a manual action involved and that's something you would see in Search Console. And I assume, yeah, I assume you checked. And if there's no manual action, then essentially it's kind of a normal website. It's not that we would take into account any previous manual actions. Like if some owner in between had a manual action, that's not something that we would take into account. So essentially, the way the site is now is the way that our algorithms are looking at it at the moment. And if you work to improve the site and if there are other issues associated with the site that you work to improve, then that should improve over time. There's no black stamp on it forever type of thing. Yeah, OK. Cool. All right, thanks, John. All right. Andrew, let me just run through the names as I have them in the list. And then after that, I'll try to get through some of the submitted ones as well. But just to kind of make sure that we follow who all is in the list. Yes, John, hello, thank you. Well, my question is about the situation when a site has an app connected to the site and for mobile users, browsers offer them to download the application, so showing some kind of banners from Google Play or the App Store. This offer, as you may know, appears at the top of the site. And inevitably, it creates a layout shift. And it's also hard to fix this shift. 
And I wanted to ask, what is your opinion on this matter? Is Google OK with these situations? I assume you're specifically asking about the cumulative layout shift from before, right? Sure, yeah. So for that, I would focus on the metrics that you can pick up from the testing tools and from the real user data that you can pick up for the site. So it's something where we don't explicitly say this specific kind of pop-up or banner is OK or this specific banner is not OK. It's really just you can test it and see what number comes out. And maybe there are things that you can do to improve that so that the number looks better and where you can also look at the number and say, this is OK for me or this is something I need to improve. Well, but they do create this shifting. And well, I don't know. You can't control them because it's not on your site. I mean, as I understand, it's not, well, it's browsers. They just offer those. I can't, I don't know, just close them forever. But well, it's an option as well, a good option. Yeah, I don't know how exactly that looks in a browser. So it's hard to say, but it's something where if you see that this is a problem for some of your pages, maybe there are ways that you can do to make it less visible on the important pages and maybe limit it to other pages or do something such as allow the user to click on a button and then trigger this in the browser, something along those lines. OK, OK, thank you. Sure. Lee, I don't know if you asked before already or are still waiting. Hey, John. Real quick, just two quick questions here. Piggybacking on the question about page speed earlier and Core Web Vitals, I work with a company and they have a very authoritative page. We're more authoritative than our competitors that are outranking us for a number of keywords. And that has to do with some other factors that I'm dealing with right now. But we do have a significantly slower page. 
However, I would say objectively, we have a more valuable experience. We offer a better product, that sort of thing. And we get more traffic than these other pages that we're ranking against. So my question is, is page speed used now as a significant ranking factor that would outweigh the content that we're providing and user experience and that sort of thing? And will that get to be more of a factor come May when Core Web Vitals rolls out? We use speed on mobile at the moment. So that's something that kind of plays in a little bit there. And in May, I think the idea is to kind of revamp how we look at that. And maybe, I don't know, my guess is that we will use that a little bit stronger as a signal. However, it's still the case that when we can recognize that someone is looking for something specific or a page is particularly relevant to users, then that's what we plan to show. So for example, if someone is looking for your company, we wouldn't show some other page just because it's a little bit faster. And that kind of means that if you have a really strong page in your topic area, then probably that will be OK, even if it's a little bit slower. But it's really hard to kind of say this is where the line will be drawn and this much is not strong enough or anything like that. Cool. Well, that makes sense. We're working on a number of things. So I'm not super worried about it, but it is one of those things where our engineering team is sort of backed up and working on getting to these. We have a very large website, so that's obviously one of the things that we're working on. My second question is just about subdomains. So my company's blog operates on a separate subdomain as opposed to a subfolder. And I'm just wondering, as far as ranking signals and stuff from Google, is that viewed as another domain entirely, or do we have authority that's going back and forth between the two? 
Obviously, we're linking between the two effectively and stuff. It depends. So in some cases, we will see this as part of the main site. In some cases, we'll see it as something kind of separate. So for the most part, I wouldn't recommend just changing this blindly and hoping that you'll get a big advantage out of it. But rather, if you're thinking about consolidating things on one domain, if it makes more sense from kind of an infrastructure or tracking point of view, then I think all of those things kind of point more in the direction of using subdirectories. But sometimes it's also the same that kind of applies to subdomains, where you say, infrastructure-wise, it's easier like this. Maybe you're using some CMS that needs to be on a separate subdomain, all of those things. So hard to say. All right. Thanks so much, John. Sure. Antonio. I'm honored to ask you directly. Thank you so much. Question. I have a page that's ranked for dozens of words in the top positions. And for a search, or a group of searches, I'm on the second page of the results. Is the best thing to do to include these terms and talk about them on the same page, or to create a specific page for those terms and link to it? I have a good experience with both options. And that makes me really doubtful. What's your recommendation? If you want, I can give an example to explain my question better. Yeah, I totally understand the question. And I don't think I can give you an absolute answer. Because like you said, sometimes it makes sense to separate things out on multiple pages. Sometimes it makes sense to concentrate on a single page. I think it depends a lot on the user expectations, maybe also the competition with regards to would users be confused if they went to one general page and they expect a kind of a specific page? Or would users also be OK with a general page that is ranking? For the most part, I think fewer pages make sense because you can make stronger pages. 
They're a little bit stronger with regards to the competition. But at the same time, if you're specific about a certain topic, then that's a very well-targeted page. So finding that balance, I think, is really hard. My usual recommendation is for you to try it out and test it out and see what works well for you in your specific situation. But thank you so much. No, I mean, it's not like a perfect answer, but hopefully it gives you some ideas. At least there's no absolute answer. I've had this problem for many years, and I still don't know exactly what to do, because it's Google. Thank you so much. You are incredible. Thank you. Stefan. Good morning. So let me set the stage real quick. I work for a fairly large company, and I have similar subdomain duplicate content challenges as Lee does. It's not exact duplicate content because the context is slightly different. There's a purchase aspect on one subdomain and a streaming aspect on a different one. So we've been trying to tackle some indexation issues where one subdomain actually has a lower traffic potential. We rank typically less well and so forth. So as I'm looking through GSC at one of my sitemaps, looking through the Excluded section, one of the details is 'Duplicate, submitted URL not selected as canonical', and there's a huge number of URLs with that flag. And so as I started to look into some of the examples and then use the Inspect URL tool, I noticed that almost invariably there was no URL included in the Google-selected canonical. So I submitted a question on the help community, and I got kind of a boilerplate answer. And I was looking as I was listening as we started the call, and it looks like that issue is fixed. So invariably, it seems like the URLs are now populated, but most of the examples that I've found are completely off base. So they have nothing to do with the actual query that the page should rank for. 
So my question is, I guess the first question is, how should one interpret the Google-selected canonical N/A as a value? I don't know. I think I saw something similar on Twitter recently, and I think that was a bug on our side. So in particular, if we index a page, then we always have a canonical. And if the tool says there is no canonical available, but it's indexed, then that would be kind of something conflicting on our side, not necessarily something actionable on your side. On the other hand, if it's not indexed, then it would be kind of normal that we say, well, we don't have a canonical associated with this URL. So it's because it's not indexed, essentially. Yeah, well, the report I'm looking at would be indexed pages only, right? Or at least I've confirmed that they're indexed. Since we configured GSC for separate subdomains, my hypothesis was that the alternative subdomain was actually ranking, and you would just simply not include that URL because it's not on the same subdomain as configured in GSC. No, I think, I mean, I don't know what the specific bug there is, but it should be the case that even if it's on a different domain, if we say a different URL on another domain is canonical for this one that you have, that we should show that. So it shouldn't be the case that we would just say, oh, we don't know, but actually we do know, but we just don't want to tell you about it. So that seems more like a bug on our side rather than a sign that we use a different subdomain for that. OK, well, it looks as though the bug may have been fixed, but you may have introduced a new bug. Oh, no, OK. It can seem completely off base from the topic that should be canonicalized. OK, if you can drop some examples maybe in the chat here, I can pick that up afterwards and pass that on to the team. Can do. Thank you much. Sure, thanks. Angie, I think you also have your hand raised. Hi, John. As always, thank you for doing this. 
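For readers following the canonicalization thread above: the signal a site owner controls directly is the rel=canonical link element, which maps duplicate URL variants onto one preferred URL. A sketch of that idea; the normalization rules here are illustrative, not Google's, and the URL is hypothetical:

```python
from urllib.parse import urlsplit, urlunsplit

# Sketch: collapse common URL variants (tracking parameters, mixed-case
# host, trailing slash) onto one preferred URL and emit its canonical tag.
def canonical_tag(url):
    parts = urlsplit(url)
    clean = urlunsplit((
        "https",                        # prefer HTTPS
        parts.netloc.lower(),           # lowercase the host
        parts.path.rstrip("/") or "/",  # drop the trailing slash
        "",                             # drop the query string (tracking params)
        "",                             # drop the fragment
    ))
    return f'<link rel="canonical" href="{clean}" />'

print(canonical_tag("http://Shop.Example.com/widgets/?utm_source=mail"))
```

The emitted tag goes into the `<head>` of every variant; as John notes, it's a hint that Google weighs against its own signals rather than a directive.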
So I'm doing a site move for a client, and I had a few questions about that. So we're changing the domain name. They want to kind of present the brand as more modern, so just changing it to the acronym of the full name. And we're not removing any pages, changing any content on pages or anything like that, so it's going to just be doing the redirects and then using the Change of Address tool in Search Console and et cetera. So pretty straightforward, but I was wondering if everything should be redirected, so the sitemap and robots.txt specifically. Is it recommended to leave both the old sitemap up and the new sitemap up so that Google is able to find all the redirects to the new site? Yes, OK. And then for the robots.txt, I saw that in the crawl stats support document, I believe, it said that 200s and 404s were successful robots.txt responses and 429s and 500s were unsuccessful. It did not mention anything about redirects, so is it recommended to leave that not redirected so that Google is still able to crawl from the old site? I don't think it matters with regards to the redirect. I think in our site move documentation, at some point, we had a recommendation that you have an empty robots.txt file on the old domain just so that we can crawl all URLs and then find the redirect to the new one and recognize, oh, it's now blocked, or it's a 404 or whatever. My guess is that's more of kind of a super small optimization with regards to crawling and not really necessary. So in particular, sitemap files and robots.txt files are more like control files and are not indexed anyway. So whether they do or do not redirect, it doesn't really matter. With regards to all of the indexable content, that is something that is kind of important for us in the sense that if we can recognize that a site is only partially moving, then we'll kind of have to shift into a mode of like, oh, we have to figure out what actually changed here. 
Whereas if we can recognize a site has completely moved, all of the indexable content has moved, then we can shift into the mode of like, oh, we'll just transfer all of the signals in bulk to the new domain. OK, yeah. So we are doing like a one-time switch, so we're not moving in sections. So it'll just literally be like flipping the switch. And I guess the signals will be processed right away? OK. Ideally, yeah. OK. And then actually on the note of site signals, because I know that these days, we talk about how pages are being ranked rather than domains as a whole, but how much of a factor are the site signals? What I'm trying to ask is basically, say we do everything perfectly, we're not changing anything on the site either, so everything should be pretty much the same. Once Google's finished reprocessing everything, can we expect to have essentially the same sort of performance and everything as we had before? It's going to be on the same server as well. So the only thing that's changing is essentially just the domain name. Or what about new pages that go up on the site after the change? Because it's a new domain that has had no previous history, could new pages be a little bit at a disadvantage? Because Google has less site signals, or I guess basically how much do the site signals have to do with that? I think in an ideal situation, all of that will just transfer, and it'll just continue working just like before. So we've been doing a number of analyses internally on site moves in particular, just to make sure that we're not missing anything, that nothing kind of goes wrong with most of the site moves. And for the most part, we can see that it clearly just completely transfers to the new domain. So if there is nothing kind of weird associated with the new domain, and the website really does a clean move, then that should be completely fine. 
I think it's always something where you don't completely know what to expect, because it's a different domain. You can't really try it out ahead of time. So there's always a little bit of uncertainty involved. So my recommendation there is just to make sure that you're really tracking all of the details, that you have a big spreadsheet with all of the checklist items to double check. So that should anything weird happen, you can be certain that we have all of these basics covered, and it's really something kind of obscure that you couldn't have known ahead of time, or otherwise you can go through the list and see, oh, I forgot to set the hreflang, or I forgot this specific thing. And then you can explicitly go in and fix that. Okay, perfect. And then my last question was, so in the site move documentation, it does mention that we should expect rankings to fluctuate. And I was just kind of thinking about the process of how Google would reprocess those pages, and how that would actually affect the ranking. So for example, if I have a page on the old site that's currently ranking at position one for a certain keyword, when we do the redirect and Google discovers it, is there a period before the crawling and re-indexing happens where it would actually drop out of the search results before being replaced by the new page, if everything is good? Or does it crawl it and reprocess it immediately, or does the old page just stay there until the new page is kind of reprocessed? Yeah. So essentially what happens there is we switch the canonical. So we would have the old page indexed, we would start seeing the redirect, we would follow the redirect, see that the same content is there, and then our systems would say, oh, it looks like it moved to this new URL, and we would just switch that out. It's not the case that it would fall out first and then be re-indexed again. 
It's really kind of like, oh, we see this connection, we see both of these pages, we can just switch it over to the new one. So it shouldn't be the case that there is a hole with regards to traffic, but usually with all site moves, you have this period where there are some things associated with the old site, some things associated with the new site, and it takes a little bit of time to shift the majority of things over. In a lot of the site moves that I looked at for double checking, this is, I don't know, a period of a couple of days, maybe a week or so, where it just kind of fluctuates a little bit until it settles down again in kind of a similar state with the new domain. Okay, thank you very much. We have tons of stuff submitted as well, but it seems like, I don't know, with the hands and such, it seems to be working fairly well, so I'm going to focus on questions from here and maybe add some comments to the questions that were submitted so that those don't completely get lost. All right, I think Kadu, if I get your name right. Yeah, it's all right. Thank you, John, for this opportunity. My question is more about Web Vitals, because we have an education platform and many of our requests are made from logged-in users. And basically, we treat the logged-in users differently from our non-logged-in users. On the same page, we will load a little more stuff for logged-in users, and this makes the page perform much worse on Web Vitals. And my question is, how concerned do I have to be about my logged-in users and the pages that we serve for them? If you have the same URL that is publicly accessible as well, then it is very likely that we can include that in kind of the aggregate data for Core Web Vitals, kind of on the real user metrics side of things, and then we might be counting that in as well. 
So if the logged-in page exists in a public form, then we might think some users are seeing this longer page, perhaps, or kind of more complicated page, and we would count those metrics. Whereas if you had separate URLs and we wouldn't be able to actually index those separate URLs, then that seems like something that we would be able to separate out. I don't know what the exact guidance here is from the Core Web Vitals side, though. So I would double-check, specifically with regards to Chrome, in the CrUX help pages to see how that would play a role in your specific case. Thank you. Let's see. John, how's it going? Wonderful. Good. Pretty good. Okay, so I have a question kind of along the lines of some of the subdomain questions that people have been asking. Because you kind of said, if I heard your answer correctly, that there are some times when that can make sense. So I'm wondering if this situation makes sense, and what are the potential risks involved, where the site is going to exist in two parts. One is the content-focused part of the site, and the other is the e-commerce-focused part of the site. So you would have shop.mysite.com, with the e-commerce side on the subdomain. So a couple of questions: would you recommend that strategy? What are the risks involved? And then, how much GameStop stock do you own? Oh man. I don't know about the last one. That's such a weird situation. Throws me off completely. Yeah, so I think to back up a bit, we regularly talk with the search leads about subdomains versus subdirectories and tell them, oh, SEOs are so obsessed about subdirectories and such. And they always tell us they should just use whatever makes sense. Our systems should be able to deal with subdomains and subdirectories essentially in the same way.
So I think there are some more second-order effects that some people are seeing there, but I don't think it's something where I'd say you will automatically have a bonus if you go to subdirectories or if you go to subdomains. So if you have your shop on a separate subdomain and that's what works well for you with regards to tracking and all of that, then I would try to keep that. I don't see a problem with moving that over, either, or trying to find a way to do a reverse proxy and move that to a subdirectory. So you think they can... because, I mean, I guess the concern, right, is the two-separate-sites situation, and trying to rank, because you want both to rank, et cetera. The problem that we have here is a technology problem: the customer wants to use Shopify for the e-commerce side, but not for the content side, and I think Shopify can't install in a subfolder. So you feel that it still could work just fine? Would Google decide to link those and consider them one site, or are they always going to be considered two sites, no matter what? It depends a little bit. I think in a situation where you just have two subdomains, then probably we would treat those as separate sites. If you have a lot of different subdomains, then we might see, oh, they're using wildcard subdomains as categories, for example, and then that would be a clear sign that actually this is one site, and we should treat that as one thing. I think in a case where you have the content on one subdomain and the shop on the other, I really don't see a problem with them competing against each other, because usually people come with one intent: if they want the product, they'll find the product pages; if they want information, they'll find the informational pages. It's not that these are competing against each other or otherwise kind of annoying each other in search. Okay, cool. Thank you. What was the other question? It was the GameStop one. That was it. Oh, okay.
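For reference, the reverse-proxy option mentioned above looks roughly like this in nginx (hostnames are placeholders, and whether a hosted platform such as Shopify permits being fronted by a proxy like this is a separate question to verify):

```nginx
# Hypothetical sketch: expose a shop hosted on a subdomain under a
# subdirectory of the main site, so it appears at example.com/shop/.
location /shop/ {
    proxy_pass https://shop.example.com/;          # upstream shop host
    proxy_set_header Host shop.example.com;        # upstream expects its own hostname
    proxy_set_header X-Forwarded-Proto $scheme;    # preserve the original scheme
}
```

With a setup like this, Google would see one hostname with a /shop/ subdirectory, though per John's answer either arrangement should be workable.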
Okay. Whew. All right. Thanks. Eric. Thanks. Can you hear me? Yes. Okay, hello. Hey, thanks for having me, and thanks for doing these sessions. It's great. It's really good. I have a few questions regarding search. Okay, so first of all, we had a review of an iPhone that came out, and for some reason it ranked lower than a first-impressions article that we did, which covered reviews from other magazines all over the world. But we are really strong locally in our country, and we had a really, really good review, and it was below our article that covered the first impressions. After one or two months, we noticed that we were missing the review schema with the rating and stuff and the price. So we added this to the article, and then after a couple of hours, it ranked above the first-impressions article that we did before. Is that possible? I don't think that would be directly related. So in particular, the rich results or the structured data is something that we use for the rich results in the search results. It's not something we would use as a ranking factor itself. What might have happened is that you changed the page enough that our systems said, oh, we need to reconsider how we index this page. And then based on that, we reconsidered the ranking. But that shouldn't be related to adding structured data to a page, essentially. So it's not the case that every review needs to have review structured data on it. That's what I thought. Okay. And also with this, do the missing fields in the review type, like brand, SKU, description, aggregate rating, and ISBN, matter if they're missing? Because I see that Google only requires the three: the price, the name of the product, and, what did I forget, the rating. Yeah, so if you have the requirements covered, then we would show that in the rich results, and the rest are really more optional.
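For reference, Product review markup along the lines discussed looks roughly like this as JSON-LD (all values are placeholders, and the exact required versus recommended properties should be checked against Google's current structured data documentation rather than this sketch):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Phone",
  "review": {
    "@type": "Review",
    "reviewRating": {
      "@type": "Rating",
      "ratingValue": "4.5",
      "bestRating": "5"
    },
    "author": { "@type": "Person", "name": "Jane Doe" }
  },
  "offers": {
    "@type": "Offer",
    "price": "999.00",
    "priceCurrency": "USD"
  }
}
```

Optional properties like brand, sku, or description can be added to the Product object, but per John's answer they affect eligibility for richer display, not ranking.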
It's not, at least as far as I know, the case that we would rank things more if you have more of the fields filled out. Okay, okay. And one more question. I know that for the search results Google mostly uses the content that it finds on the site, but do the meta tags, description and keywords, matter at all for the results? We don't use the keywords meta tag at all. So you can do whatever you want with it; I don't think any search engine uses that anymore. The description we use as a guide for the description, or the snippet, that we show in search. It's not always purely from the description meta tag, but in a lot of cases it is something that we use there. So if you have a specific snippet that you want to have shown, then definitely make sure that's in the description. But we wouldn't use that as a ranking factor. It's more that you're showing users what this page is about, and maybe they will click through more if they understand it a little bit better, but it's not that it would rank higher because of that. Yeah, I see. All right. And we received an email, actually yesterday and today, for our web page in Search Console, that you launched a new service or something that groups all the subdomains and subdirectories and HTTPS into one domain. And so we verified the domain and everything, and we still don't see the data that we see for our HTTPS www site. Will this show up eventually? Yes. The data sometimes takes up to a week to be completely visible in Search Console. Some features are a little bit faster, some features take a little bit more time, but it's kind of normal that it takes a little bit of time to be visible. And I think with this message in particular, we sent it out when we noticed that sites are not completely verified in Search Console, but maybe you have traffic to the HTTPS version and you only have HTTP verified.
That's something we just wanted to make sure people are aware of, so that they're not looking at an incomplete picture without knowing it. So with the domain verification, that's all covered automatically. I see, perfect. And we also noticed that our sitemaps were not indexing properly, or loading properly, in Search Console. And it was fine for a few years, I mean, no problems, but last week we noticed that for some reason Search Console couldn't load our sitemaps from the sitemap index. We have an index file with all the sitemap parts, and it loaded most of them, but the ones that mattered the most, the posts, it just couldn't load. So is it something that we can do something about? I would probably post about that in the help forum with the specific sitemap URL, so that someone can take a look there. And if it is something that's more on Google's side, then usually the folks in the help forum can help to escalate that to us. I think it's a bug, because we added some parameters to the URL and it loaded fine, so it's probably not something on our end. Okay, and then a last question, sorry for that. We had a review for the AirPods Max. We were the first in the country to have this, both the product and the review, and I just don't know why we still don't show up in the results. And it's really annoying, because we spent a lot of time and effort on the product review and we don't show up in the results. So can we sort it out? Yeah, I don't know. I can't make you rank automatically higher. Of course. So the one thing I would watch out for is whether the page is indexed or not. If it's not indexed, then usually that's more of a technical thing that you can work to improve. But if it is indexed and it's just not ranking the way that you want, it's really hard for us to say whether this is a big problem or not a big problem.
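For reference on the sitemap question above: a sitemap index file in the sitemaps.org format is just a list of child sitemap URLs, roughly like this (all URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/sitemap-posts.xml</loc>
  </sitemap>
  <sitemap>
    <loc>https://www.example.com/sitemap-pages.xml</loc>
  </sitemap>
</sitemapindex>
```

If Search Console loads some child sitemaps but not others, checking that each listed file is reachable and returns valid XML at its `<loc>` URL is a reasonable first step before posting in the help forum.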
So what I would also do there is maybe post in the help forum with the specifics: the URL that you're using, the queries that you're looking at, and, especially if you're looking at a specific country, maybe those details as well. And they can take a look at that there and escalate it to us if appropriate. But a lot of these ranking questions are really tricky, because it's often a matter of which page we should show. Everyone wants to be first. Of course. OK, thank you very much. Thank you, and have a good day, all of you. Thanks. All right, Mark. Yeah, thanks, John. This is more of a European-focused question. We have a cookie consent layer. Obviously, it's IAB-conformant, sending out the correct strings and everything. So my first question was, if I'm not mistaken, I think Google mentioned that loading layers like these will be recognized and they will not be counted against the page speed. However, when I do check the domain in the PageSpeed Insights tool, it is recognized, and it's found to be improvable. However, we use probably the fastest CMP on the market, so there's really nothing to improve here. Just a question: will Google go back and take that out of consideration, also with reference to May? Or what do you think? Essentially, we'd be able to take that out with regards to indexing the content. But from a speed point of view, we would not differentiate between this kind of banner and any other kind of banner. So essentially, from a speed point of view, it is something we would see as something that users see when they go to the page, and that would be taken into account there. So I don't think, at least as far as I know, that we would have any logic to say, oh, this is a proper cookie banner, therefore we will ignore that it's slow or that it causes layout shifts. Well, there's this list available of officially registered CMPs, right? So we're doing something that's legally obliged.
And we're just wondering, because in theory, a competitor who's not using any of those layers might have a page speed advantage. And when page speed is becoming more and more important, it's just more out of curiosity how you deal with that. Now, my understanding is we would see that as a part of the page. So if it's slowing down the loading of your page, then that would be something we would take into account and say, this page is slower than maybe the same page without that specific setup. But I think another aspect that plays into that as well is that you're competing with sites that have similar setups. So it's less a matter of, we will not show your site at all in search; essentially, other people have the same kind of struggles and need to figure out the same kind of thing. So you're competing with other sites that have the same problem, which makes it a little bit more even. But obviously, in the situation that you mentioned, if someone is, let's call it rogue, and doesn't do any kind of banner at all, then that would be something where we would say, well, maybe this is a faster page, but maybe the other page, which is a little bit slower, is a better page, or a better-fitting page, for the user. And then we would still rank that there. So it's something where we do take speed and usability into account for rankings, but it's not the only thing. The content is still, by far, I think, the biggest aspect there. OK, thank you very much. Sure. Raymond, let's see. Hi, John. So I also posted this question in the comments for the video. We have a site with a mega menu that has over 1,000 links. It used to be, back in 2018, that this mega menu would only load upon user action. So when a user hovered over the nav bar, an Ajax call would load those links. At some point in 2018, we added those links statically.
And I know correlation isn't causation, but around that same time, we saw a big drop in our search graph. We are now contemplating removing these links from our nav bar as static links and going back to links that only load upon user action via the Ajax call. We are nevertheless retaining a clear path to all of these links on relevant pages. So we'll still have a clear crawl path for those links, but only on relevant pages, rather than having these links, thousands of mega menu links, on every page. So we're wondering, what could possibly be the impact or ramifications of removing these 1,000 mega menu links as static links, even though we are still retaining a crawl path to all of these links on relevant pages? Yeah. I think on the one hand, it's kind of hard to say, because I don't know how the rest of your site is structured. So if, for example, these 1,000 pages are all of your pages, then it would be very different compared to, say, these 1,000 pages being your categories, with a million kind of subcategories or something like that. But in general, what you're looking at from a change point of view is going from more of a flat site structure to more of a, I don't know, deeper site structure. I don't know what the official name is. A siloed site or something like that? Yeah. And that's something where sometimes it can definitely make sense. So it's something we sometimes see folks kind of obsessing about, limiting the crawl depth, for example, and trying to make it so that Googlebot can crawl to all pages in a very quick time. And to some extent, I think that makes sense. On the other hand, more of a top-down approach or pyramid structure helps us a lot more to understand the context of individual pages within the site. So in particular, if we know this category is associated with these other subcategories, then that's a clear connection that we have between those parts.
And that definitely helps us to understand how these things are connected and how they work together a little bit better. Whereas if it's very flat, then we think, oh, all of these are equally important, and we don't really know which of these are connected to each other. So from my point of view, I think for a lot of sites it makes sense to have more of a pyramid structure. But at the same time, you don't want it to be such that you have to click through a million times to actually get to the actual content. You need to have, I don't know, some reasonable number of clicks, essentially, to get to the content. I see. Now, we are an e-commerce site, so there is a case to be made for having mega menus. However, we just want those not to show up as static links that would get crawled, but still be available to the user upon user action, rather than just having them there as static links. But we are still looking to have a siloed site with more of a pyramidal structure, rather than something where we indiscriminately have 1,000 links on every page. So we've talked to a number of SEO firms, and they told us, well, if you do this, then you can be greatly negatively impacted. All of a sudden, these 1,000 links that you had on all these pages are gone. And it's not really true, because we're trying to retain those links, but only on relevant pages. But yeah, they keep telling us, don't do it, because if you lose all these 1,000 links on all the pages, it's bound to have a negative effect. And I'm wondering if an empirical statement like that can be made. I don't think it would always have a negative effect. I do think if you make it too deep, then that makes it harder for us to crawl and harder for us to pass the signals around. But it's not the case that a super-flat structure is going to be better than a kind of reasonable pyramid structure.
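The flat-versus-pyramid trade-off above comes down to click depth: how many link hops from the homepage each page sits. Not from the transcript, but as an illustration, a minimal breadth-first-search sketch over a made-up internal-link graph (all paths are hypothetical):

```python
from collections import deque

def click_depths(links, start="/"):
    """Breadth-first search over an internal-link graph.

    links: dict mapping each page to the pages it links to.
    Returns the minimum number of clicks from `start` to each
    reachable page.
    """
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

# Hypothetical pyramid: home -> categories -> products
links = {
    "/": ["/cars/", "/bikes/"],
    "/cars/": ["/cars/model-a/", "/cars/model-b/"],
    "/bikes/": ["/bikes/model-c/"],
}
print(click_depths(links))
```

With a 1,000-link mega menu on every page, nearly everything sits at depth 1; removing it and linking only from relevant pages pushes pages deeper, which is the change being weighed here: a reasonable maximum depth rather than either extreme.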
So, personally, I would try to aim for more of a pyramid structure, just to make it easier for us to understand the context of the individual pages and to forward the signals into related areas. But it's also a very significant change on a site like that. So it's something where I understand that it makes sense to get more input on the options, maybe even to test things out. Like, you take one category and say, I'll try it here and see what happens with regards to crawling, with regards to indexing, with regards to ranking, all of these things, because it is quite a big step when you change your site from a super-flat layout to more of a pyramid. Cool. Let me pause the recording here maybe for a moment. I'll be here for a little bit longer, so we can still answer some of the questions that are remaining. But if you're watching this recording, thanks for watching along. I hope you found this useful. And I'll be setting up the next batch of Hangouts probably on Monday or so, for next week and the week after that. All right. Thanks a lot. And let me just pause.