All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these Office Hours Hangouts, where webmasters, publishers, and SEOs can jump in and ask any web search related questions that might be on their minds. As always, if any of you want to get started with a question, feel free to jump in now. Otherwise, I'll work back through the 5 million questions that were submitted already, which is not that bad. Wait, not 5 million. OK, let's see.

The first question I see here is a link for participation. I usually post those to the comments and to the video. I think maybe this time there's a little bit of a time delay, but usually they're visible there.

So let's see. "Two weeks ago, our website lost 94% of its Google traffic overnight, after consistent search traffic for the last 20 years and no major changes. We assume it's something technical. Could sharing IPs or SSL via a CDN like Cloudflare cause a big drop in traffic algorithmically? We dug in deeper and found some sites with extremely risky content on the same IP. We had Cloudflare change the IP and we have a dedicated certificate; however, we're still down in traffic. What could be happening here?"

So I didn't have a chance to look into this site directly, so I don't know exactly what has happened there. But in general, just because other sites are hosted on the same IP address isn't a reason for concern. In particular, with larger hosters, shared IP addresses are fairly common, and with CDNs, shared IP addresses are extremely common. It's also something that changes, because a lot of CDNs have endpoints in different countries, and they share those endpoints among the different websites that are active there. So a user in Germany might see different IP addresses than a user in the US, for example. But in general, sharing IP addresses is an extremely common practice and not something that would be problematic.

In the really early days, this was sometimes very useful to recognize big spamming hosts, where we saw one IP address with 9,000 sites on it, all spamming. And if there were two other websites on that same host, then that might be tricky for us to work out: are these two sites really two completely separate sites compared to those 9,000 other sites? That's a tricky situation for algorithms. But in most cases, like this one, we'll see a mix of all kinds of different sites: different sites in different languages, for different countries, with different target users, some spammy sites, some non-spammy sites on the same IP address. And all of that is perfectly fine. So the fact that there's one spammy site on an IP address wouldn't be a reason for us to treat the others as a problem.

So I don't know specifically what has happened here with this website; I'd have to take a look into that. In general, also, on the point that the website was doing well in search for so many years before: I tend to say that's good on your side, but it's not a sign that we would always keep it like that. Just because a site was doing well in search in the past doesn't mean it'll continue to do well in search. On the one hand, user expectations change. On the other hand, Google's algorithms change too.
So these things can always change, and it can happen that things sometimes change significantly for a website. Our goal there is less to say, with our algorithms, that this one particular website is bad, but rather to say we've recognized that maybe we've mis-served user expectations, or we've been doing things in ways that don't match what users expect anymore. So our algorithms have changed to try to bring the results that users find relevant nowadays back into the search results. That's something that can always play a role, depending on the type of website.

"How can we get the algorithm to know that our site is the original creator of content stolen and modified by another website?" So there's a forum thread linked that I haven't looked at yet to see what exactly has been happening there. OK. Oh, hi. Hey, John. I see you're on. OK, your question, go for it.

So basically, that's my question there. Pretty much, we had brought up to you quite a while ago a site that seemed to outrank us by basically stealing our content and then modifying it so it doesn't violate copyright, and then outranking us. We noticed a pattern when we looked back and did some research: we'll redesign our site, improve our quality, improve in the rankings, and then they'll copy it, and sometime over the next month or two they'll start outranking us. And it seems to us like somehow the algorithm is confused and is giving that site credit for being the content originator instead of us, and then suppressing us in the rankings as a result.

I don't know. I'd have to take a look at the site to see what exactly is happening there. It's kind of tricky from an algorithm point of view to say why our algorithms would always be picking that site over your site for those queries. But what I would generally aim to do there, if these sites are copying your content, is to find ways to tackle that at the root, to discourage them from copying your content. So maybe look into things like a DMCA complaint. I don't know if that's relevant in your case or not, but anything to try to handle it in a way that search doesn't have to guess which of these versions of the content should be ranking for those queries.

OK, because what they do is they'll steal it and then modify it enough that if you file a DMCA complaint, they'll say, well, it's not exactly the same thing, even though at its core it's the same information. They modify it enough that it isn't a direct copy anymore. So it becomes a tricky situation, and we've been trying to deal with that.

Yeah, if you can send me some examples, I'd be happy to take a look at that with the team here. In particular, some queries where you see this happening, and some specific URLs on your site and on their site where you see it happening; I think that would be useful for the team. Should I send it via the forum, or what's the best way to do that now? If you have a forum thread, and it looks like you have one, maybe just post it there and I can pick it up. OK, perfect. Thank you, John. Cool.

John, if I can jump in here, I have the exact same issue, but with a little bit of a twist. It's with the International Business Times, and with a site out of India called Pinkvilla. They at least acknowledge us.
The Indian sites don't link, but the International Business Times says "Gossip Cop exclusively reported this," and day after day, well, I can't say always, but recently, let's say the last three days, they're completely outranking us. Traffic has dropped at least 50%, and in one instance today, on a story where they said we got the exclusive comments, and that they wouldn't have the story if it weren't for us, they were in the second position in Top Stories on mobile and we were nowhere. I mean, this has happened quite a lot. So I empathize with the other individual here who has this, but in this instance we're even acknowledged and linked to.

I don't know. I suspect it's a little bit different in your case than in the other case, but it's kind of suspicious to have two people on the call at the same time with the same kind of problem. So I'll take a look into that with the team here as well.

Hey, John, this also happens to us, in the same situation: a competitor is copying all our pages, just modifying the content itself, and now they're outranking us. It's something that has been happening for quite a long time. We build pages, and soon after we get a notification that a new page has been indexed on the competitor's side, and we see it's exactly the same kind of content. The content structure is the same and everything, and after a week or so, they kind of outrank us. Can you also maybe drop some examples into that thread, maybe the same thread as the one from Lyle? Sure, I'll do that. Cool, that makes it easier to pick up those issues. All right. I'll drop the link to the thread in the chat so people can get to it. Oh my gosh, I will be swamped. OK, I won't do that then. That's fine, that's fine.

"Can you explain the issue with using JSON-LD via Tag Manager? Tag Manager is used to verify Search Console and for Google Analytics, so surely it should be stable enough." So I think we've talked about this quite a number of times in a bunch of these Hangouts, and there are a few blog posts written up about this as well from various people, including, I imagine, Barry. But essentially it comes back to this: for us to be able to pull in content from Tag Manager, we need to be able to render the JavaScript, process the script files from Tag Manager and the output that is provided there, and include that on the indexing side. That always takes a lot of work for our systems, and it's not something that we do for all pages. In particular, when we see that the page would otherwise be the same, it's hard for our systems to justify processing all of this JavaScript as well. On top of that, a lot of the testing tools don't process the Tag Manager output either. So it's really hard for tools to confirm that the markup is working properly, it takes longer for it to be processed in search, and it might be a little bit flaky, in that you don't really know exactly what has been indexed in search at any given time. Those are all reasons why, from my point of view, while it's fine to use Tag Manager for other things, and you can use it for JSON-LD structured data for search as well, it's not a great approach specifically for structured data in search. There are much more straightforward ways to provide the structured data directly on the web page, which makes it easier for maintenance, for tracking, for testing, all of that. (A rough sketch of that direct approach follows.)
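As an illustration of providing the markup directly, here is a minimal sketch; the product data is made up, and this is just one possible way to serialize JSON-LD into server-rendered HTML rather than injecting it through Tag Manager:

```python
# Sketch: serializing structured data straight into the server-rendered HTML,
# so crawlers and testing tools see it without having to execute JavaScript.
# The product values below are made-up examples.
import json

product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Widget",
    "image": "https://example.com/images/widget.jpg",
    "offers": {
        "@type": "Offer",
        "price": "19.99",
        "priceCurrency": "USD",
    },
}

script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(product, indent=2)
    + "</script>"
)
print(script_tag)  # embed this in the <head> of the page template
```

Because the tag is part of the initial HTML, it does not depend on any JavaScript being rendered, and the same markup shows up in the testing tools.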
So that's what I would recommend doing there. I'm not saying you can't use Tag Manager for this; you certainly can use it, and we'll try our best to pick it up and use it in search. But it's not quite the same level of speed, flexibility, and reliability as actually providing the structured data directly on the pages.

"A client did an HTTPS migration using 302 redirects instead of 301s. Do they need to change those to 301s? How long will it take for Google to understand that this is a permanent redirect?" So we will probably be picking that up as well. Lots of people use the wrong type of redirect for site moves, and we work on figuring that out properly anyway. A quick way to see what is happening is to check in Search Console whether things are being indexed properly. If the new domain is working well there, then it's probably OK already. That said, if you've recognized an issue like this, or any other technical issue on a website that is easy to fix and feels like it might have a big impact, then I would just go ahead and fix it, especially the wrong redirect type. That's like a one-line change in the .htaccess file on most websites. It's really easy to fix, and then you don't have to worry about search engines interpreting it the wrong way.

"Our image galleries have a unique URL per image, like gallery/image1, image2, image3. We want to add gallery/view-all and use this as the canonical URL, but we don't link to it anywhere on the site. Can we do that? Does the view-all page need to be visible to the reader?" So this goes into the general question of how Google picks a URL to show in search. On the one hand, in this case, the landing page for image1 having a rel=canonical set to the view-all page would mean pointing at a page that is not equivalent. So it's hit and miss whether we would pick up and use that canonical at all. That's one aspect to keep in mind. The other thing to keep in mind is that even when we do understand that these pages could be seen as equivalent, we use multiple factors to determine which of them is the actual canonical. For that, we use the rel=canonical, we use redirects if there are any, we use internal and external links, and we use things like sitemaps and hreflang links. All of that helps us understand which of these URLs is the one that we should be showing. And if the canonical URL that you specify is one that you never use within the rest of your website, then chances are we'll decide that maybe that link rel=canonical was a mistake, that the webmaster didn't actually mean it like that, and maybe we'll pick a different URL as the canonical. So my guess is that what will happen here is either we will ignore the rel=canonical, because these are not the same pages, or we will pick one of the other existing pages instead, because that's one of the pages that is strongly linked within the website. So I don't think this specific setup would be that useful in your case.

"What's your recommendation for sites that have a lot of content that they want to provide for other languages and countries, but they have only translated the interface so far, not the main content?" So this is something that happens quite a bit, especially for user-generated content.
So if you have a forum or a blog or something, and people are commenting in one language, but you have it set up so that the UI can change to other languages, then you quickly have the situation where the UI could be in French or in German, but the content might still be in Spanish, for example, because everyone is commenting in Spanish. That's something you can handle in multiple ways. You can say the Spanish version is the canonical version, and everything is just the Spanish version; that's one option. Or you can use hreflang annotations between those versions to say, this is the most French version of my content that I can provide: the main content isn't in French, but the UI is in French, so a user going to that page would be able to navigate around my website. That's something you could do. And essentially, those are the different variations you can provide there to let us know a little bit more about what your preferences are.

From a practical point of view, it's ultimately up to you how you want to be shown in search. If you think it's useful for a French user to come to your site and land on a page with the content in Spanish and the UI in French, then by all means use the hreflang annotations between those versions. If you think a user in France would have trouble navigating your site at all if the main content is in Spanish, even with the UI in French, then maybe it makes sense to keep just the Spanish version with the Spanish UI indexed. So that's ultimately up to you; neither of these is a perfect solution. Sometimes it depends a bit on how uniform your content is, and on how clearly you understand which language versions you want to have indexed and which versions users expect to see in the search results. For example, if you're a very international forum and people post in all kinds of different languages, then it's probably tricky to say you only want one UI version indexed; maybe it makes sense to have all UI versions indexed. The downside, of course, of having all UI versions indexed is that it multiplies the number of URLs that your site effectively has. That means we have to crawl a lot more. And if it's a large user-generated content site, that means we have to crawl all of the user-generated content, and then all the multiples of that for each different language version. I don't know whether that's useful, whether that's too much crawling for your website, or whether it prevents newer content from appearing quickly in the search results; that's something else to weigh there. If you're just talking about a few thousand articles, then maybe that's less of an issue.

"We have a blog running alongside our e-commerce website. Since the beginning, the blog posts were marked as written by a generic 'editor' user. Looking at the quality guidelines and E-A-T, we'd like to replace 'editor' with the real name of each post's author. Is this kind of operation positive, or could it potentially be seen as spam?" I think that's a good move. Especially if you know who wrote each article originally and you can create an author landing page, I think that's a good move, even just from a pure user point of view. If someone goes to your website and suddenly sees that these articles were written by, I don't know, Barry instead of just "editor," and you have a landing page for that author, then that could be a sign that it's better for the user.
That could be something that users pick up on, or they go to the author's landing page and see that this author is actually an expert in the field and has been active there for many years. That's, I think, always useful to have on a website in general. With regards to Google rankings, it's tricky to say whether that would have any direct effect at all. But I'd expect at least indirect effects, in that users might trust your content more and might recommend it more.

"Schema markup with a desktop and AMP version: is it OK if the desktop version is implemented using microdata but the AMP version is using JSON-LD?" Sure, that's perfectly fine. With regards to the format used, I don't see a problem there. The only thing to keep in mind is that, as far as I know, some kinds of structured data are only available in JSON-LD, so you might need to double-check the types of structured data that you're using. But having one version of your site use one format of structured data and another version use a different format, even within the same desktop, mobile, and AMP variations, is certainly possible. For example, if you have a blog and a product directory on an e-commerce site, and the e-commerce part has reviews that use JSON-LD, while your blog uses article markup in microdata (I don't know if that's the right way to do it), that would be perfectly fine too.

"Regarding hidden content: is display:none OK? Google support has mentioned that a white font on a white background, or font size 0, would be against the guidelines. But what about display:none?" Hidden content is generally not something that we appreciate. In particular, it comes across as the site trying to push keywords into Google's index that are not actually visible on the page itself. So that's something I would generally worry about. You mentioned responsive design in the rest of your question, and I think that's the one aspect that does come into play here. If you're using responsive design to make this content visible for mobile users or for desktop users, then that's perfectly fine. But if the content is essentially always invisible, like font size 0, or a white font on a white background, or a black font on a black background, then those are the kinds of things that our systems will pick up on and say, well, maybe this text isn't as relevant as it otherwise could be. From a practical point of view, you probably won't get a manual action for something like this, but our algorithms do try to figure it out, and they will try to devalue that content when it comes to search, so that it's less likely to be shown in the snippet and less likely to be treated as really important on those pages.

"After a links manual action, how long does Google treat the domain differently once a reconsideration request has been accepted but the site hasn't regained its potential rankings and traffic?" So I think there are two aspects here. On the one hand, once the manual action is resolved, pretty much directly that site will be visible in search without that manual action. There's one exception: if a site was removed for pure spam reasons, then it was essentially removed from our index completely. It's not that we can just turn it back on and show it again; we have to actually recrawl and reprocess that site. And sometimes that takes a few weeks.
But for all other manual actions, once the manual action is resolved, things are back in their previous state. It's not that Google holds a grudge and says, well, there was a manual action here, so I need to be extra careful. Resolved really is resolved. With regards to links, of course, if your site was ranking artificially in the search results due to unnatural links, and you got a manual action and fixed it by removing those unnatural links, then your site won't be artificially ranking higher because of those unnatural links anymore; they're gone now. So it would be completely normal to see a change in visibility after resolving something like that. Similarly, if a site were unnaturally visible due to other things on the website and you resolved that by removing those things, then obviously your site will be visible naturally again, but it won't be unnaturally visible due to the things that you removed. So that's something to keep in mind there.

Hey, John, related to that: looking at some of our backlink profile, we found, and I don't even know how our links ended up on these pages, pages that have just like 7,000 links on them and things like that. We disavowed them when we found them. Is there anything we need to be concerned about, other than doing that, when we find stuff like that?

No, that's essentially the right move, especially if it's something that you're really worried about. I think for most websites it doesn't make sense to go out and disavow things that are just iffy and weird, because for the most part we just ignore those. In particular with links: if it's something where, when you look at it, you'd say this could be seen as links being bought by us, or unnaturally placed there by us, so that if someone from the webspam team were to look at it manually, they would assume this is us doing something stupid, then disavowing them or getting them removed is probably the right move. But if it's just an iffy link, and it looks like there are millions of other links on the page because someone ran a tool and dropped tons of links into a forum, then that's something our algorithms have figured out already. So that's not something I would worry about. OK, thanks. Sure.

"Do you know which user agent Google uses to crawl Dynamic Search Ads content?" I have no idea what Dynamic Search Ads are, so I don't know which user agent that would be. If you're running such ads, you can probably look at your log files to see what is crawling those URLs. But I don't know how that's set up.

"What do you suggest to tackle low-traffic, low-quality pages on a site? There are lots of suggestions regarding content pruning. What recommendations do you have?" So first off, the assumption that a page with low traffic is also low quality is something that I would question. Sometimes pages just have low traffic because not a lot of people search for them, but they're still really good pages. So I would question the assumption that you can just go into analytics, sort your pages by number of page views, and delete all of the lowest pages there, because I don't think that necessarily tells you whether those pages are really low quality or not. So that's the first assumption to check.
If you know your website, then obviously you can combine different metrics to try to figure out where the low-quality pages are. But I would still recommend making sure that these really are low-quality pages before you take any kind of harsh action on them. And then, as a next step, if you do know that these are low-quality pages: whenever I talk to our engineers from the quality team, they tell us not to tell webmasters to just go off and delete those pages, but instead to find ways to improve them. If you know they're low-quality pages, that probably means you know what is missing, and that probably means you know there are ways to make them higher-quality pages. So that's the direction I would take: not just deleting things that are low quality, but figuring out a way to make them higher quality instead. That could be by combining pages: maybe you see that one page is kind of thin, but it matches a bigger page that you have elsewhere on your website, so combining them makes sense, 301-redirecting them to one shared URL instead. That might be an option. Rewriting them to be higher quality is obviously a good idea; obviously it takes work, so it's not one simple magic trick to rank number one. And finally, if it's really something you can't resolve at all, or such a big mass of low-quality pages that you can't realistically fix them, then maybe deleting them is the right move. So those are the different variations available. But again, I would strongly question the assumption that low traffic equals low quality. Even on a larger site, don't just assume that because something has low traffic, it's not important for your website or for the rest of the web.

Yeah, since that was my question: the reason we're asking is that I see there's lots of potential in that specific keyword or topic, but for a very long time these pages haven't been ranking well, and they're not even getting enough traffic. And it's not just one or a few pages; it's a very big number of pages. So I'm considering what the best practice would be to handle this specific situation, where I can see it's definitely not working. Maybe I can merge a few of the pages that cover similar topics, and then see if some pages can be improved by rewriting the content or by giving those pages a better experience. So that was our concern.

Yeah, so I think one way you could look at this is to ask: given the content that you have, what would your preferred new website look like? So, assuming I have all of this content and I had to create a new website out of it, what would it look like? And then try to find a way to migrate your existing content into that new structure you have in mind. Like I said, that could include combining pages, combining maybe tens of different pages into one stronger page. It could be deleting pages where you say, well, these don't make any sense for my website anymore. Maybe it was something users cared about a couple of years ago, but now, I don't know, nobody is playing Ingress anymore, so for all of those Ingress pages on my website, I have to make a hard decision and maybe delete them.
I can see the shocked faces everywhere now. But these kinds of things happen over time, and it makes sense to clean things up over time. Sometimes that means deleting, sometimes that means combining, and sometimes that just means rewriting and cleaning up. So it's hard to have one answer that works for every site and every situation.

Thanks, John. Just to follow up on that, you mentioned the structuring of pages, or the sections of a website. We have a website with kind of old as well as new formats, structure-wise. A lot of pages on our website are still in the root folder, and some of the pages are created under a specific folder structure. So how does Google see that, when the same kind of topic sits directly under the root in one case and under a specific folder in another, and the majority of things are under the root? I don't see a problem with that. I think that's just a matter of the URLs that you use. Having some pages within one logical part of your website and some pages within a different logical part of your website can be completely normal. Thanks, John.

"How do we fix the crawl frequency of low-value parameter pages within a website? Will Google crawl more of such pages, because the quantity of these pages is larger compared to the important pages?" So I think this was your question as well. In general, you don't need to fix the crawl rate of pages, unless these are pages that are being changed more frequently than they're being crawled. If you have an article that you wrote, and it's being crawled once every three months, and you're never changing this article, that's perfectly fine; we don't need to crawl it more often. There is no ranking bonus for being crawled more often. Crawling, from our side, is more of a technical thing, where we say this page has changed, so we should find a way to pick up that change as quickly as possible. It's not that we would say, well, this page has been crawled twice in the last week, therefore we will rank it higher. Those are completely separate parts of our algorithms.

John, this was something I noticed when checking the log files: 90% of the crawl budget, you could say, is going to those specific URLs, and only 10% is actually going to my revenue products, or you could say my money pages. So I was wondering how I can lower the crawl frequency for those specific sections, so that Google can start crawling, or giving more importance to, the other sections of the site.

OK, so you actually want to do the opposite, to have those pages crawled less frequently, which I think is a good move too. From our point of view, there is really no way to do that directly. It's something you would almost need to attack from the other direction, by saying: these are the pages that are important on my website, therefore I'll link to them prominently within my website, I'll make sure that all of my other pages refer to them, and I'll specify them in the sitemap file with a last modification date that we can confirm. All of those signals help us understand that we need to crawl these pages more frequently, because there are changes on these pages. On the other hand, if there are no changes on these pages, we don't really need to re-crawl them more frequently. So that's the other aspect there. (A rough sketch of the sitemap piece of that follows.)
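To illustrate the sitemap part of that answer, here is a minimal sketch; the URLs and dates are made-up examples, and the point is simply that the lastmod value should only move when the page content really changes:

```python
# Minimal sketch: building sitemap entries whose <lastmod> is only bumped
# when the page content actually changes. URLs and dates are made up.
from datetime import date
from xml.sax.saxutils import escape


def sitemap_entry(url: str, last_modified: date) -> str:
    # An accurate lastmod is one of the signals that tells crawlers
    # which pages actually need re-crawling.
    return (
        "  <url>\n"
        f"    <loc>{escape(url)}</loc>\n"
        f"    <lastmod>{last_modified.isoformat()}</lastmod>\n"
        "  </url>"
    )


entries = "\n".join([
    sitemap_entry("https://example.com/products/widget", date(2019, 3, 1)),
    sitemap_entry("https://example.com/blog/launch-post", date(2018, 11, 20)),
])
print(
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n</urlset>"
)
```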
If these are pages that are important for you, but they are not changing frequently, then there's no need to artificially force them to be crawled more often.

So, John, can we put a 304 status code on those specific pages, the ones that don't change frequently but that Google still seems to think are important and keeps crawling? I think a 304 is something we only handle for If-Modified-Since requests. Yeah. So you would only use a 304 in response to a request with an If-Modified-Since header; returning it for a normal GET request wouldn't make sense. OK. Yes. All right.

"Can you tell whether, with a redirect, only a link penalty passes, or whether both link and content penalties pass? For example, a website with a pure spam manual action is redirected to another site, so technically the URLs would be soft 404s. Will it affect the website being redirected to?" So I'm not quite sure which part of this question you're focusing on. On the one hand, if a random spammy website redirects to your website, that's usually something we can recognize and just ignore. On the other hand, if yours is the spammy website and you're redirecting to another website to try to escape that penalty, then we will probably be able to follow that site migration and apply the manual action or algorithmic action to the new website as well. So my recommendation would be, instead of trying to get away with fancy redirects or other types of site moves, just clean up the issues so that you don't have to worry about them anymore. If there are link-related actions on that website, then clean up those links so that you're in a clean state again. The reconsideration process is great for that, because someone from the webspam team will take a manual look at your website and say: this looks good, this is fine, you did good work, you cleaned things up, it's clear that you understand what you should be doing now, so we can remove that manual action. I think that's really useful from a practical point of view. So that would be my recommendation if you're the website that has this problem. On the other hand, if, like I mentioned, some random website redirects to your website, then that's usually something we can recognize: this is not a normal site move, this is just one website redirecting to another, and we can ignore it.

John, two quick general questions. One is related to site load speed. We've read and heard various things, including, recently, people saying that every microsecond counts and things like that. What is the current policy? I know in the past you've said, as long as it's not slow, you're fine. What is the current position?

So speed is something that does matter quite a bit to us, and it has a big effect on users, so it's something that I would personally take quite seriously. And the nice part about speed is that there are various tools that give you pretty objective measures to work with. Compared to a lot of other issues around SEO and the quality of your content, speed is quite measurable and something you can actively work on. It should also be something where you see a direct effect in your users' behavior on your website. (A crude sketch of that kind of objective measurement follows.)
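As one crude illustration of the kind of objective measurement being described, here is a sketch that times the first byte of a page; it assumes the third-party requests library and is not one of the official speed tools:

```python
# Rough sketch: approximating time-to-first-byte for a page.
# Assumes the third-party "requests" library; the URL is a placeholder.
import time

import requests

start = time.monotonic()
# stream=True returns once headers arrive, without downloading the body.
response = requests.get("https://example.com/", stream=True)
# Pull a single byte of the body to approximate the first content byte.
next(response.iter_content(chunk_size=1), None)
ttfb = time.monotonic() - start
print(f"status={response.status_code} approx_ttfb={ttfb:.3f}s")
```

Real measurement tools track many more metrics (render times, content visibility, and so on), but the point stands that these numbers are objective and repeatable.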
So it's not just that, from Google's point of view, we say speed is important and it is a ranking factor; it's something you will see directly when users come to your website. If your website suddenly takes a couple of seconds longer to load, those users will react quite differently on your website, and you'll have more trouble converting them into customers, however you define customers on your website.

Sorry to interrupt. From the standpoint of, if it's 1.1 seconds versus 1.2 seconds, that kind of thing, would you say it's very important to really optimize that? I think the tricky part with speed is that there are so many different measures in the meantime that it's hard for me to say load time is the only thing you should be thinking about. There are ways to determine how quickly the page is generally accessible and how quickly the content is visible on the page; even ignoring that the rest of the page below the fold might still be rendering and taking a bit of time to be ready, maybe the part that users care about is visible fairly quickly. So from that point of view, small differences are usually less of an issue. But like I mentioned, speed is something where you can use these different tools to come up with different metrics, focus on those metrics, and try to improve them. You can measure that yourself, and you can work on it without having to go through various Google tools and wait for things to update in the index and in those tools. OK.

And then the other one, just real quick: is there any issue at all with having your meta title be equivalent to your H1 tag on a page? No, that's perfectly fine. Thank you. Sure.

"Can I use official Google videos in my blog, or can I only link to them? For example, Matt Cutts' videos about SEO. I will use AdSense on the blog when I have enough posts. My blog will be complete in six months." I don't think there are any restrictions with regards to embedding videos from my channel. And if there were restrictions, then I think the embed option on YouTube wouldn't be available; so if the embed option is there, go for it. In general, though, I'd be cautious about using just a video as the primary piece of content on a web page. You should really work to use the video in a way that supports your primary content, not one that replaces it. For example, I wouldn't take any of these videos, just put them in a blog post, add a title, and expect them to show up highly in search. But if you have specific content around that video, if you have a transcription of it and some commentary on that transcription and on the content shown in the video, or you're using the video as a point of reference for your own content, then I think that's a perfectly fine approach. Purely putting a video on a page, at least from a web search point of view, makes it really hard for us to determine what is actually useful on the page and why we should show it in the search results.

"If I pay for Google Ads, will my ranking be better or worse?" So we get this question every now and then. The question here is whether my ranking will be affected, but the "better or worse" part is something that we also hear: some people say that your ranking will get better if you use Google Ads.
Some people say your ranking will get worse if you use Google Ads, because we supposedly want you to buy more ads. Neither of those is true. Our search results are completely independent of whether or not you use Google Ads. They're completely independent of the technologies that you use on your website. So if you use something like Analytics or some other tracking tool, that's totally up to you. If you monetize with AdSense or any of the other ad networks, totally up to you. Whether or not you use Google products or other Google services on your website is totally up to you. We prefer to have these services stand on their own, and if you say this one particular Google service is not great and you don't want to use it, then feel free to use something else. We don't want to put you in a box where you're stuck between focusing on your website and doing what you think is right for your users, and having to use one particular product. So it's really the case that we don't tie these together. We do that explicitly, and we work hard to make sure these things work well independently.

All right, it looks like we're running low on time, but maybe there are some questions from you all that I can help answer. Anything left? I've got a couple. OK.

So the first one is related to a project we started to work on. Basically, the website is using Angular to render its content. They serve a very basic server-rendered version of the website, and the rest is rendered by Angular on the client. And they received a manual action for spammy structured markup. What we've noticed is that they put their markup in the server-rendered HTML, but the actual product content, the information, the price, the images, and everything else, is rendered afterwards on the client side. So we were wondering if the problem is that the markup is served directly, whereas the actual content viewable on the website is rendered on the client side afterwards. We did notice they're using product markup and image gallery markup; they thought image gallery was appropriate for the product images, so we suggested that they use the actual images within the product markup instead. But I don't know if that's a reason they would actually get a manual action; obviously, they have no malicious intent or anything like that. So I was just wondering whether it's because they're using Angular and that client-side rendering. We requested a review to ask for more information, but we just received a blanket response, so not much more to go on.

OK. It looks like the manual action is pretty old there, so I don't know when they switched to Angular. It might be that there's something else behind it. Well, the review was just a week ago, and the action is still there. So it seems like, whatever is causing it, whoever looked over the review still thinks there's a problem. Do you think that might be the case with the client-side rendering? I don't know what the website looked like back then, so that's sometimes a bit tricky. And I don't know which specific type of structured data was at play there. I know that with the newer structured data manual action messages we send out, we try to give a bit more information, at least saying which type of structured data was the problem.
So I'll double-check with the team to see if we can switch those out and at least give some more information for the older structured data issues as well. Right; it just says spammy structured data, affecting all pages, and there's not much more information to go on other than just removing everything. And they don't want to do that, and I don't think that's a good idea, because I don't want them to reapply it and then get the manual action again. So I'm not sure what's best. I'll take a look at that and pass it on to the team here to see what we can do to clean up those older structured data notifications. OK, OK.

And one more thing regarding this. This is also a project we've been working on, and it was affected by that August 1 update last year. Unfortunately, the website has lost about 70% of its traffic now. Some of the main keywords dropped from the top three to page three or so. So it's been pretty rough. We know there are some issues; they don't have complete control over the platform to make a lot of changes. And it seems some other websites using that platform were also impacted, but they recovered within a month without doing anything, while in this case it just looks worse and worse as time goes on. It's a Romanian site, so I don't know if that plays any part or is any kind of factor. Again, we know there are some issues and improvements to make, but I don't know if it's so severe that the site should go from top positions to page three or four or something like that. I don't know.

So a lot of the changes that we've made over time are more around trying to recognize what we consider to be relevant pages. So that shouldn't be something where the platform itself plays a role, and it might be a sign of why some of these other sites on a similar platform are ranking differently now: maybe we see, from the content that they're providing on that platform, that it makes sense to rank them differently now. But it's hard to review a website offhand and say they need to do this or they need to do that. So I can take a look at that with the team here, but I don't know if there will be something specific that we can get back to you about.

Sure. So the other sites I mentioned, they're not in Romania either; they're in other countries. The only difference is they did nothing different: they didn't add any content, they didn't change any of the design or anything else, and they recovered after just a month or two. But in this case, the Romanian site, nothing, and it still keeps going down; I think it's at an 80% traffic loss now. It just seems like a severe penalty for some reason, rather than just, well, you need to make some improvements here and there. It's as if nothing is relevant anymore, or something like that.

Yeah, I mean, sometimes the relevance algorithms can change in ways like that. Sure. That's completely possible. I don't know the site in question, or the queries, and everything like that, so it's hard to say. But that's something that can happen: we update our algorithms, we test them, and we say, well, these are the new algorithms, and then suddenly some sites see drastic drops in rankings and others see nice rises as well. So that can happen, too. But I can take a look at that. In general, just because it's Romanian doesn't mean that it's problematic or something that we would worry about. Really? It's not English; so it's not that I just need to make an English website instead?
No, I think that should be totally fine. And if something like the language or the country targeting were an issue that we couldn't figure out properly, then I would consider that a bug and something that we need to work on. But non-English content has been around for a long time. Sure. We should be able to deal with that. The only reason I'm pushing on this is that we've been working with them for six or seven years now, or five or six years, something like that, and they're the only ones that have been so drastically affected. And it doesn't look like we did anything wrong, from our point of view, obviously. So, yeah. OK. I'll take it. Yeah. Thanks. Cool.

Any other last questions before we wrap up? I'll be quick: just one question about frequently asked questions, I mean a URL like /faq. For example, I've read some case studies on the internet saying it's important to put some questions on this page and give some answers to your clients, and that it helps you optimize for voice search. And I'm interested in how I should structure this. For example, if I put all these questions on just one page, I'm not sure I can keep the targeting for each keyword. If I make many small pages, with a slash for each, it will be thin content. What would you propose here? How should I optimize this page?

So I don't think we have any FAQ content markup at the moment. What we have for structured data is the Q&A page markup, which is essentially one question followed by one or more answers. That usually makes sense for something like a product support page, where multiple users can submit answers, or a forum page where multiple people submit answers to a single question. But FAQ pages, where you write the questions and the answers yourself, are explicitly called out in the Q&A page documentation as not supported. So maybe we'll have more in that direction in the future, but at least for the moment, I don't think we have that support.

Because I saw a few different types. For example, some bloggers have just one page with all the questions and answers on it. But I've also seen situations where they have a separate page per question, answer one, answer two, answer three, many different pages. To my mind, that means thin content, because I don't need a lot of content to explain a simple question, and I think if I create that structure, it will be a lot of thin content. But if I put it all on one page, I'm not sure about the relevance of the page, because some famous bloggers recommend optimizing this page for voice search, and they even show some proof with screenshots: if you use long keywords of seven or eight words, you can capture those queries with this page.

Yeah, I don't know if there are any generic guidelines we could give for sites like this. I think having FAQ content on your pages is perfectly fine, and it's something you can structure in multiple ways: one page with a lot of questions, one page with just one question and answer, or maybe longer answers where you need a complete page for each. But from a structured data point of view, from a Google Search point of view, there's nothing, at least that I'm aware of, that says you need to do it one way or the other. I have seen some good examples where sites appear in the featured snippets because they have a nice setup with common questions that people actually ask. That, I think, is pretty cool if you can make that happen. (For reference, a rough sketch of the Q&A page markup mentioned above follows.)
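For reference, and not as an official example, the Q&A page markup described above is roughly this shape: one user-submitted question with one or more answers. The question and answer text here are invented:

```python
# Rough sketch of QAPage structured data: one question, one accepted answer.
# All text values are made-up examples, not an official sample.
import json

qa_page = {
    "@context": "https://schema.org",
    "@type": "QAPage",
    "mainEntity": {
        "@type": "Question",
        "name": "How do I reset my example phone?",
        "answerCount": 1,
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Hold the power button for ten seconds, then choose Reset.",
            "upvoteCount": 3,
        },
    },
}
print(
    '<script type="application/ld+json">'
    + json.dumps(qa_page, indent=2)
    + "</script>"
)
```

Again, per the answer above, this applies to pages where users submit the answers, not to an FAQ page you write yourself.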
But obviously, you need to think about whether people are actually asking these questions, and whether you really have an answer that is useful for you as a business if it's shown in search. So if, I don't know, someone asks how to join a Google Webmaster Hangout and you give an answer for that, is that really something that has any value for your website if it's shown like that? Whereas if you're, say, a phone manufacturer and you have an answer to "How do I reset my phone?" with a couple of steps, then that's something that saves you an extra support call, which makes sense for you. So I think it would make sense to focus on that type of content. I wouldn't blindly go off and try to create content just to match queries that you've seen; really think about the bigger picture, what makes sense for you, and why you would want to have your pages shown this way in the search results. OK, thank you.

Hey, John, I've got one question. It's about the Search Console API. The site that I work on has billions of pages, and through the Search Console API we seem to only be able to extract 50,000 pages and queries. Is that expected? Or should we be able to get access to all the pages that Google's serving up, and the impressions and clicks and all that kind of stuff?

I think there's a cap somewhere, but I'm not sure where it is. Mihai probably has a bit of experience there. What I've done in the past, for cases where I used the API to pull a lot of data on different pages, is to refine the queries that are sent to the API. So instead of trying to pull the data for a whole month, I would pull it on a daily basis and then aggregate those pages on my side. So that might be something that works. I don't know, Mihai?

Yeah, so I have been doing pagination, and it caps out after I hit 50,000 records. And as well, the filter groups seem to just filter based on the original 50,000. So, and I'm also doing this per day, if I just pull down one day's worth of data, I'll get just shy of 50,000 records. And then if I use the filter groups to filter on that, it's just a subset of the original 50,000 records I pulled; it doesn't expand beyond that.

So there is a limit that we store for a site per day with regards to the number of entries that we have for Search Analytics in general, but I'm not sure where that limit would be. Mihai, what's your experience? I know last time we talked about this, it did look like there was a bug when the start date was equal to the end date, so a request for just a single day: it always seemed to cap out at 5,000, much less, not 50,000. I'm not sure if that's intended or not. But if you go for two days, you get the normal amount; I even got 200,000 or 300,000 rows without any issues. So there did seem to be some issue with a request for a single day; I don't know if that's intended or not. OK, yeah, I did see the 5,000 limit if I pick just query as a dimension. So I'll try to expand it to two days, and if that gives me what I need, then that's fine too. Cool, thanks.

Cool, thanks for answering these questions, Mihai. We should have you on here more often. All right, so let's take a break here. Thank you all for joining in. Hopefully my internet wasn't too bad; otherwise, I'll have to figure something else out for when I'm at home.
And I hope to see you all again in one of the future Hangouts. So thanks again for joining and see you next time. Thank you. Thanks, John. Thanks, John. Thanks, John. Bye.