All right. Welcome, everyone, to today's Webmaster Central office hours hangout from home. My name is John Mueller. I'm a Webmaster Trends Analyst at Google in Switzerland. And part of what we do are these office hours hangouts, where folks can join in and ask any question around their website and web search, and we try to come up with an answer for you. A bunch of stuff was already submitted on YouTube, so we can go through some of that. But we can also go through some kind of urgent questions from people who are joining here live. Anyone have anything? If not, I'll just jump into the questions from YouTube, and you can stop me at any time along the way and I can try to answer your question. All right. Go for it, Barry. So I've been obviously tracking how you guys are manually changing the search results constantly. No, I'm joking. But the tools that are tracking changes to the Google search rankings, like the top 10 listings and so forth, are showing so much change that it seems like maybe searcher behavior is changing and Google's algorithms are adapting to it. I mean, if you look at these tools over the past couple of weeks, it literally feels like there's a Google algorithm update every day in terms of how much fluctuation there is in the tools that are tracking this. Webmasters are saying, my rankings are changing. So is it possible that searcher behavior is influencing the algorithm itself? I don't know. So I haven't been watching what's happening with these tools, so that's probably something that you see more. But in general, we do try to adapt our algorithms to provide the information that's relevant for users at the time when they need it. So that could be something where things are kind of evolving to make sure that people have the right information at the right time. But I imagine that's more specific to certain kinds of searches. 
I don't think it's like we would just change all of the search results around. So if you're looking for a manual for washing machines, like, why would you change the search results for that? Things are essentially still all the same. But I do know that there are various teams working at Google on improving the search results, particularly around kind of the whole crisis situation, where people have higher expectations from search and they want something that they can trust. And that's probably something where we're trying to provide, I don't know, better quality of service, or good quality of service, at least, for those that are looking for this kind of critical information. So you wouldn't think it's specific to anything? Even if it's not COVID-related searches, you don't think maybe the way people are searching, more for buying toilet paper and less for going to the movies, would influence how the algorithm adjusts? Does that make sense, that question? Like the search trends influencing the Google algorithm and the rankings around that. Yeah, I don't think that would be happening. I mean, it's something where we always see user behavior shifts. And sometimes certain topics become really popular, and we try to show the right search results for those kinds of things. And this happens all the time. This is one of those things where it probably is a little bit bigger and is lasting longer than, kind of, the Oscars when they come and go. But these kinds of shifts are things that our algorithms have to watch out for anyway. So it's not something that I'd say would be specific to this current situation. Thank you. Sure. I just have a question, John. OK. So today I found out that there is a website that is scraping the hell out of my content. And I found out that it's ranking very close to my website in Google search results. This also happens when I look for some technical solutions for my software problems and I find some Stack Overflow links. 
And then it doesn't satisfy my need. I scroll down a little bit, and I find exactly the same question with the same content. Does Google make any kind of effort to ban or penalize these scrapers? Or how does it work if I want to report this website that is stealing my content? Well, I think, first of all, if it's your content and if it's copyrighted content, then maybe the DMCA process is the right approach. That's something where you need to double-check things from a legal point of view. So I can't give you legal advice. So I can't tell you, like, you should do this or you should not do that. But actually, my website is about legal advice. So I can handle that. OK. Perfect. Yeah. So maybe that would be a process that would work there. That's generally the most direct way to resolve this kind of thing, where if the content is taken down, then that's resolved in search. So that kind of just automatically works out. Otherwise, the web spam reports are something that you can do. In general, we do look at the web spam reports that come in. But we don't take manual action on every single one, because we try to use them to improve our algorithms overall. So that's kind of one thing where it's useful to get feedback like that. But I wouldn't see it as something where, within a couple of days, you will suddenly see that website disappear from search. Those are generally the main approaches there. Sometimes it's also tricky in that, if we don't have a lot of other good content for that specific query, then we end up showing these kinds of things. So oftentimes, if you search for a quote, then someone else will have copied that, and we'll show that in the search results. But that's because you're searching for that one specific piece of text. And it's like we have that page indexed. Maybe we don't rank it first for that piece of content. But it's known to us. So we would show it in the search results if you're explicitly looking for that. 
So that's something where sometimes the perception is also a little bit skewed, in that if it's your content, you explicitly search for it, and you see some spammer also ranking, then you're like, that shouldn't be happening. But at the same time, maybe there's nothing else that we would show there, and we happen to know about this page. So at some point, we'll just show it. Yeah, gotcha. Thank you. And I think another thing to also keep in mind is that sometimes people add more information to individual pieces of content. So we see that all the time with our blog posts, where someone like Barry or someone else will take it and quote parts of the blog post and add some more information, or open things up for comments from other people. And from our point of view, that's kind of a normal part of the web. Like, technically speaking, you could look at that and say, well, they're copying the content. But actually, they're providing something different. It's a slightly different version of the content. It's maybe a commentary on that piece of content. Maybe it's just other people's comments that are also around that piece of content. And that's something where we would say it's not even about who wrote it first and which one of these we should rank higher. It's just a separate piece of content overall that happens to have quotes that are the same. But essentially, it's a unique piece of content. Thank you. Thank you for the clarification. Sure. Can I jump in with a question? Sure. So in another hangout, you mentioned that the publisher or organization schema doesn't need to be on every single page of the website. If it's on the home page or the contact page, that's good enough. Now, what about if we're talking about an article or blog post schema that's on an article? Does it need to be on every single paginated page of that article, or can it just be on the first page of that article? What are you trying to achieve with the article markup there? 
I think that's kind of what I would look at there. If there's something that you're trying to achieve with that article markup that's relevant to the whole set of those pages, then put it on all of those pages. If it's something where just the intro of the article is relevant enough for that extra piece of markup, then maybe it's enough to just have it on the first page. That makes sense. Thank you. Hi, Mr. Mueller. Hi. Can I ask you a question? Sure. I'm Vahan, lead developer at Search Engine Journal. So I'm looking at the Google Discover specifications and read how to enable the website to use a large image in Discover. And it says one should use high-quality images at least 1,200 pixels wide and ensure that Google has the rights to display your high-quality images to users, either by using AMP or by filling out the form expressing your interest in the opt-in program. So my question is, is it possible to have large images in Discover without AMP? Yes. Yes. So I think they want to move away from that form. And another option that you can do, I don't know if we documented it there or in the other documentation, is to use the max-image-preview robots meta tag, where you can specify, I think, large, for the large preview image. That would also apply to Discover. OK, thank you very much, because I did notice that the CNN website sometimes has a large image but doesn't have AMP. And that max robots image tag, could you please repeat it? Let me see, max-image-preview, I think it's called. Yeah, max-image-preview. A meta tag. OK. Thank you very much. Sure. Appreciate it. OK, let me run through some of the submitted questions, and we can get back to more questions from you all towards the end. And of course, if you have any comments on individual questions or answers, feel free to jump on in. Question for an e-commerce website: from March 21, we saw that Google removed the indexing of most of our categories. 
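For readers following along, the max-image-preview robots meta tag discussed above is a standard robots meta directive. A minimal sketch (the directive name and the `large` value come from Google's documentation; the page around it is made up for illustration):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example article</title>
    <!-- Allows Google to show a large image preview for this page,
         which also applies to large images in Discover -->
    <meta name="robots" content="max-image-preview:large">
  </head>
  <body>
    <img src="hero-1200px-wide.jpg" alt="Example hero image">
  </body>
</html>
```

The same directive can also be sent as an `X-Robots-Tag` HTTP header instead of a meta tag.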
Suddenly, all of our categories are marked in Search Console as duplicates of a unique category, and Google is picking another canonical or declaring them not available. When rendering the pages in Search Console, I can see that sometimes none of the images are rendered, and sometimes they are. I should mention that we implemented JavaScript to load the product listing in the category. Do you think that could be the reason? Even if Google is rendering different product names with different links, possibly for Google, this is the same page? It's really hard to say without looking at specific pages, but often that sounds like something that would be more of a technical issue with regards to those pages. In particular, if you're saying this is a specific type of page on your website, if there is no other reason for things to be dropped from search, then my hunch is that, for one reason or another, we're seeing the same content for these pages. And if we're seeing the same content, that can either be because, on your server, you're doing something unique for Googlebot, where maybe you're serving a page that shows a server error, or maybe you're showing a page that says, it looks like you're a robot, can you fill out this captcha? It could also be that you're providing the content in JavaScript in a way that we can't process properly. So those are the options that would fall into this situation. And usually, these are things that you can double-check with the testing tools, where you can work out, is this a technical issue in my infrastructure, or is this something with the JavaScript implementation that I have, and then you can work to fix that. What I would do here is maybe post in the Webmaster Help Forum with the specifics, so some sample URLs that you're seeing this with. 
Alternately, if you feel it's really tied to the JavaScript side, we have a JavaScript working group, which is kind of a closed forum that you can just join, and you can ask questions there as well. So those are probably the two directions I would take there. Can you give us more details about how image indexing and ranking work? We know many things about HTML URL indexing and ranking, but when it comes to images, few things are really known. For example, how does Google decide to rank one image instead of another? Are backlinks important for an image? Does Google use Cloud Vision technology to rank and index images? How important is the text surrounding images? So many things. OK, I feel this is almost a topic that would deserve a completely separate hangout. So I don't know how much detail I can go into here, but we do have a fairly comprehensive Help Center article on best practices for Google Images, which includes a lot of the things that you're asking about here. In general, what I would assume with images is that Googlebot doesn't see the contents of the image and instead needs to understand the context of how it's used from your web page. So essentially, we look at the HTML page. We see there are some images embedded there. There's text on the HTML page, there's text around the image, there's an alt attribute associated with the image. All of this is on the web page, but it's about the image. And for us, the combination of the image plus the web page is essentially what we need to use for ranking. So if someone is searching for, I don't know, a beach vacation, then we're not going to look at all images to see where there might be a beach in the image, but rather we'll try to find matching web pages that have unique and compelling images that we can show that kind of apply to this situation, this query that someone is looking for. 
So that's kind of the thing that I would watch out for: we primarily use the web page to understand the image, and we always need the combination of the web page and the image file when it comes to ranking. Can general mistakes, such as missing a space between two words, showing 20 products when the heading says 10 products, placing a hyphen incorrectly in schema, heading tags, et cetera, affect SEO? So I guess it depends on where you make those typos. When you're talking about structured data in particular, since you mentioned schema, that's something where if the structured data is wrong, with a typo, for example, then we might not be able to process it at all. Because structured data is something that we process automatically, we need to be able to process it one to one as it comes in. And if there are errors in that structured data in a way that we can't parse it, then we're not going to try to interpret it. However, when it comes to content, if there are typos in your content, if you have extra dashes or periods or spaces or things like that within your content, usually that's less of an issue. That's something that happens on pages. We understand that. And I think the only case where these kinds of typos would be problematic is if, with those typos, you're making it impossible for us to understand what the primary content is on a page. So for example, if you have a specific product name on your website and it's mentioned on one page one time, and that one time it's mentioned completely wrong, maybe with missing spaces around the product name that you're describing, then it'll be really hard for us to understand that this is a unique product name that's actually shown in an incorrect way on the page, if we can't extract it out as a word, for example. However, if you have this product name multiple times on a page, like you normally would, and one of them has a typo, then that doesn't change anything for us. 
We can understand that this page is about that product. That said, normally things essentially just work out with regards to typos. It's not something where you need to have a 100% perfect page. I don't think any web page is completely perfect anyway. So it's usually not a matter of something where you need to panic if you recognize typos on your pages. Page A is canonicalized to page B, and page B is canonicalized to page C. How does Google treat this? So first of all, I think what you need to figure out in a case like this is what you actually want to have happen. So do you want page A indexed? Do you want it indexed as page B? Or do you want it indexed as page C? And as soon as you work out what you want to have happen, then you should make it as clear as possible to Google. So if you're working with the rel canonical link element, for example, then make sure that you're consistent, that you always point at the version that you do want to have indexed, and then we'll try to follow that. On the other hand, if you're using rel canonical in inconsistent ways, if your internal linking is inconsistent, if your sitemap file mentions things in different ways too, then we're going to have to make a guess. And it's not that you can determine ahead of time what Google will guess. It might be we guess this way. It might be we guess a different way later on. It might be that it changes over time as well. So with canonicalization, it's really like, if you want something to happen, make it really clear to us. The other thing with canonicalization that people kind of forget, I think, every now and then, is that canonicalization is for us essentially a way of picking a URL to show in search. It's usually not a matter of things ranking in different ways. So whether we pick one URL to show in search as the canonical, or we pick a different URL, if the content is exactly the same, we would rank it exactly the same. 
So that kind of decision around canonicalization wouldn't cause your site to jump up or jump down in search. It's just, well, this URL or that URL. At the end of February, I bought an expired domain. Afterwards, I set up everything fresh and verified it with Search Console. Then I realized there were crawl anomaly and soft 404 errors. I tried to fix them with 301s or 410s, but it's still showing errors in Search Console. How can I get those off? So in general, if you have a domain that was used previously, then we probably have a bunch of URLs that we know about from that domain. And we'll try to recrawl them over time. And if we notice that they don't work, because they return a 404 or because they redirect somewhere else, then we might flag that as a 404 or as a soft 404 in Search Console. And that's perfectly fine. I mean, these URLs are no longer valid on your website. So Google sees errors, and we show them to you. And then you can look at those errors and say, yeah, that's on purpose. I removed all of these URLs. Or I didn't reinstate all of these URLs. And that's perfectly fine. So it's not something that you need to suppress or hide from Google. The other thing maybe worth mentioning is, if you're using an expired domain, just because there used to be a website there doesn't mean that you have any kind of advantage from using an expired domain. So if the expired domain name is really what you wanted, kind of as a domain name, then that sounds like a good thing. On the other hand, if you're just picking up that expired domain name in the hope of artificially being featured high in search because that previous website was actually pretty good, then that's not something that I would expect to happen. I want to remove an image from Google Search. I used the Google outdated content removal tool and submitted a request, and it shows the status as removed in Google. But when I visit, the image is not removed. 
So it's really hard to understand exactly what you're trying there. I would strongly recommend posting in the Webmaster Help Forum, maybe with a screenshot, so that someone can take a look and see what is happening specifically with those pages that you're mentioning. One thing to keep in mind is that the outdated content tool, I believe, just updates the content that we have indexed in search. And if you're using that for images, that would be the wrong thing to use there. You'd need to remove that page from search, essentially. The other thing to keep in mind is that Google doesn't remove the images on the server. Rather, we can remove them from the search results. So if you're searching for something and that image is there, then that's something that you could potentially remove, depending on the situation. On the other hand, if you take the URL where the image is hosted and you try it out in your browser, it might be that it still works, because that's not something that we can control. So those are some things to keep in mind. The other thing maybe also worth mentioning is that, in particular with images, sometimes the same image is used on multiple pages. And if you remove the image once, it might be that the same image from a different location pops up in the search results afterwards. And essentially, you just need to watch out for that and then also have that other location removed as well. In view of the recent nofollow changes, what is now the best practice in terms of links to a PDF version of the same page? Previously, we would just add nofollow so that the original HTML would appear in the index. But now PDFs are sitting ducks for indexation. We can't, I guess, add a noindex instruction to the PDF, and we don't want any perceived duplicate content issues. So what can you do? So first of all, you can do a noindex for PDFs. That's essentially the X-Robots-Tag. 
That's an HTTP header that you can add to the PDF files, well, not to the files, but to what the server returns for those files. And with the X-Robots-Tag header, you can use any of the robots meta tag values. And I believe, even in our robots meta tag documentation, we have examples of specifically how to block PDFs from being indexed, probably for both Apache and NGINX. So I would double-check that. The other thing with regards to nofollow: my guess is that this will continue to work essentially the same way internally within a website, in that we just won't treat that link as being as important overall. However, like before, just because you have a nofollow link to that PDF doesn't mean that we will never see that PDF. Other people might have other links to that PDF. We might see links from other places, or at certain periods of time we might see a normal link to the PDF file, and we'll go off and index it. So even previously, just having a nofollow link to a page did not mean that that page would never get indexed. My blog is just three months old, and my posts sometimes keep appearing on the first page and then suddenly disappear after one or two days. Why does this happen? It's really hard to say, but my guess is, with a site that new, it's just a matter of our systems not being 100% sure how they should treat the new content that you're providing there. Over time, as we understand the website better, we're a little bit better at understanding how new content on this website fits in with the context of the rest of the web, and we can work out which of these things we should be indexing faster or slower, or showing differently in search. But especially if it's a really new website, then sometimes we have to guess, and then our algorithms might say, oh, we'll index this quickly, it looks like something good, and then afterwards we might be like, well, maybe we didn't need to do that. And that's usually something that settles down after maybe half a year or a year or so. 
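The server-side noindex for PDFs mentioned earlier can be sketched as web server configuration. This is an illustrative fragment under the usual assumptions (Apache with mod_headers enabled), not a quote from Google's documentation:

```apache
# Apache (e.g. in httpd.conf or .htaccess, mod_headers enabled):
# send an "X-Robots-Tag: noindex" header with every PDF response,
# which tells Google not to index those files.
<Files "*.pdf">
  Header set X-Robots-Tag "noindex"
</Files>

# The NGINX equivalent would be a location block along the lines of:
#   location ~* \.pdf$ { add_header X-Robots-Tag "noindex"; }
```

Because the directive travels in the HTTP response header, it works for non-HTML files like PDFs, where a robots meta tag has nowhere to live.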
In terms of headings best practice, and in particular headings that currently include internal links, is there a benefit to moving these from the H2s to the body of the text? So I think there's an example here. I took a quick look, and some of the H2 headings are links to other pieces of content, and some of them are just headings. From my point of view, that's totally up to you. I don't know if there are usability issues that you might need to watch out for with regards to headings that are also links, but in general, from Google's point of view, these things don't really matter that much. We do use headings to better understand the content on a page, but when you're talking about those links from one page to another page, I don't think that link being in a heading would play any visible role at all with regards to search. We had a situation where, for one keyword, our website is ranking at position two, while for the same keyword, it's at position six in the desktop results. Why is that? That can happen. So on the one hand, it's possible for rankings to change fairly quickly. It's also possible for things to be in an experiment, essentially, where different people, when they try it out, might see slightly different results. That's completely normal from our point of view. It's also possible that there are different rankings in mobile and in desktop search results. That, from our point of view, also makes sense. So in particular, if we recognize that a page is not mobile-friendly, and that when users go to it, they can't really use it on their mobile device, then that's something we would maybe show a little bit lower in the mobile search results. So these are the kinds of things where sometimes the mobile and the desktop search results are just slightly different. Can excessive div tags in anchor text reduce rendering performance? I have no idea how that would work out. 
Essentially, with regards to rendering performance, we use something that's very similar to normal Chrome. So if you use the normal performance testing tools that you have available in Chrome, the Lighthouse test, the different other testing tools that essentially build on the Chrome stack, and you see that you can improve the rendering performance of your pages, then that's always a good thing. It's good for users. And if Googlebot can render pages a little bit better, then that's also good for our servers as well. But whether just having excessive div tags on a page would cause a significant difference in rendering performance, I kind of doubt that. But I'm sure you could create a test page that does cause this kind of an issue. On our e-commerce site, an a href link is in the menu filters, and then the same a href is used somewhere in the middle of the primary content. Is there any difference? I don't quite understand the question, but I think it's kind of like, if you have the same link on the page in multiple places, is that a good thing or a bad thing? From our point of view, that's perfectly fine. There's nothing special that you need to do if you have the same link on the page multiple times. That's kind of a common scenario that happens with a lot of sites. So that's not something I would see as something that you'd need to artificially suppress or change. The next question is about the consequences of COVID-19 on search results. How does Google react when such a new word appears, and what's the impact on search results? I think Barry talked about this in the beginning. Essentially, these kinds of situations, I mean, this is certainly a unique situation, but these kinds of changes happen all the time. I believe maybe a couple of years ago, we mentioned again that something like 15% of all searches every day are completely new. And these kinds of things happen all the time. 
Suddenly, something completely new comes up and people search for it, and we need to figure out what they're searching for, what it actually means, and which pages would be relevant to show there. And we need to do that in an automated way, in the sense that we can't manually double-check every search results page that we ever show to people, because there are just so many different variations, and, like I mentioned, so many new kinds of searches every day. So this is something where, from what I've seen just from personally browsing around, it feels like our algorithms are able to deal with this fairly well. And I think that's almost also a testament to all of the work that the engineers and the ranking teams have been doing over the years to make sure that, whenever something completely new comes up, we're able to deal with it appropriately. I have an eight-month-old website that ranks for the keyword city plus SEO at position 19 in the non-US search results or locations. But when I check my rankings in the US, it seems I'm ranking very badly. Last month, we temporarily reached the first page of Google, and then we started slowly losing rankings again. Yeah, I don't know exactly what to say here, but essentially, we do show different results in different locations, in the sense that if we can tell that someone is trying to find something local, and we can tell that a specific web page is matching their intent in a reasonable way, then we'll apply that in the search results and rank that appropriately. So if you're looking for one keyword in one country and you're looking for the same keyword in a different country, it's usually pretty normal that we would show different results for those kinds of things, especially if you're asking about something that's more locally related. 
So if you're looking for, I don't know, a washing machine manual, then probably we don't need to use geotargeting that much to figure out which of these pages we should show. But if you're searching for a washing machine repairman, that's probably something where it makes sense to use geotargeting to figure out where this user is located and which of the web pages that we have would be best suited for this user at this time. So those kinds of local search results and different rankings, that's pretty normal, depending on the kind of query. We're a hosting provider, and some of our customers are hosted on subdomains of our domain. Some of these domains are 301 redirected to our own domain, some not. Therefore, we have some valid and many excluded URLs from those subdomains in our Search Console property. Unfortunately, we can't change anything about how we handle those customers' hosting right now, so all the official guidelines for permanently removing URLs are not possible. The next idea would be manually removing those subdomains in Search Console with the removal tool every six months. I don't think you need to do that. So essentially, if these are pages that no longer exist, or where you have a new page that replaces the previous page, then serving a 404 or serving a redirect for things that have moved is perfectly normal. In Search Console, we will show those as excluded in the reports, but it's not something that you need to manually fix. You don't need to suppress those errors or that status; that's completely normal. We show that in Search Console so that, if you're not aware of this situation, you can follow up there and see why these pages are excluded. And then from there, work out: was this a technical issue on my server, perhaps at a certain time, or is this by design? And if it's by design, if you really removed those pages, if you don't want them serving anymore, if you want them redirecting or whatnot, then that's perfectly normal. 
It's not that we will rank your website anywhere lower just because you have pages that are currently not valid. From our point of view, it's a sign of a healthy website if you're serving the proper status code. So if things are gone and you tell us they're gone, that's perfect. In the mobile-first indexing world, will hidden content behind tabs and accordions still be devalued, for example, because there's a lower chance that it'll be seen by a user? No. Specifically, when it comes to content on mobile pages, we do take into account anything that's in the HTML. So if there's something there that might be visible to users at some point, we will include that in indexing. So that's completely normal. If we have a fragment URL in an href, does Google completely ignore that whole link, or just the part after the fragment? We essentially see that link, and we will drop anything after the fragment and just take the earlier part of the URL, and assume that's the page that we can fetch from the server, and that's the page that we can index, and that's the page that we'll use to forward any signals with regards to links on those pages. I've seen a ranking drop on my domain since the March update, and I noticed in Search Console that many of the staging links were referencing the live domain. But the website has already been removed from the server and is serving 500s, but it's still in the Google cache. I've requested to remove the cache in Search Console. Will this have a worse impact on my rankings? It sounds like there are a lot of things happening there. So it's kind of hard to say what exactly is relevant there. I think, maybe offhand, it's worth saying that we make ranking changes all the time, and we try to make sure that these ranking changes reflect what we think is relevant and useful for users. So it might just be that you're seeing the results of a normal ranking change that we make kind of all the time. 
The other aspect, with regards to the staging site: you removed the staging site, and maybe it's still indexed. That's something where I would make sure the staging site URLs are removed properly, so that they return a 404 when someone tries to access them, not a 500. A 500 error is a server error, and essentially when we see that, our algorithms take it as a sign that maybe they're causing an error on your server, and usually they'll slow down with crawling. So serving a 500 on a regular basis is probably not something that you want to do.

Are an href link with a nofollow attribute and an uncrawlable link the same for Google? By uncrawlable, I mean something like a link with an onclick handler.

So we use nofollow as a signal in the meantime when it comes to crawling, indexing, and ranking. The effects will vary depending on how our algorithms approach that individual link. And things like the onclick handler that you mentioned in the example, that's something where usually we would not see it as a link. It's not that our JavaScript systems will go through all pages, find every onclick handler, and see what happens when you trigger it. Usually we would not see that as a link. So if you're making a JavaScript-based website, don't use onclick handlers for navigation. One thing that might happen in that particular example, where you have a JavaScript snippet in the onclick attribute that mentions a URL directly, is that we will probably recognize the URL in the JavaScript, and we will try to see if we can index that page individually if we haven't seen it before. So it's something we wouldn't be able to treat as a link, but we'd find a reference to that URL and we might go off and index it. There are lots of edge cases involved there. But if you're using a JavaScript site, don't use onclick.
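The two crawling behaviors described above, dropping the URL fragment from a normal href and treating a URL inside an onclick handler merely as a text mention, can be sketched roughly in Python; the HTML snippet and URLs are made up for illustration:

```python
import re
from urllib.parse import urldefrag

# Made-up HTML: one real link, one onclick "link". All URLs are illustrative.
html = """
<a href="https://example.com/guide#install">Guide</a>
<span onclick="location.href='https://example.com/hidden'">More</span>
"""

# A real href is treated as a link; the fragment is dropped before crawling.
href = re.search(r'href="([^"]+)"', html).group(1)
crawl_url, fragment = urldefrag(href)
print(crawl_url)  # https://example.com/guide

# A URL inside onclick JavaScript is not a link. At best the raw string is
# spotted as a plain URL mention and may be fetched on its own, carrying
# no link signals.
mentions = re.findall(r"""https?://[^'"\s<>]+""", html)
```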
Use normal a href links instead. And if you're using nofollow, just keep in mind that it's a signal, not a directive. It's not a way to block indexing of a specific page.

You mentioned in Google Search News that only the content on the mobile version of the page is indexed, but it doesn't mention hidden content behind tabs.

I think we talked about that just before.

My images are ranking well, but my content is not ranking. I have unique content, which is helpful for users. Structured data is good. I'm unable to figure out the problem. Everything seems good, and there are no technical issues on my website.

It's really hard to say what you should be doing differently with regards to a website in general. So that's something where I'd almost get a second opinion, and maybe post about it in the Webmaster Help Forum, so that others can take a look at your pages and at the queries that you're trying to rank for, and give you some honest, raw advice on your website overall. Sometimes it's useful to just get someone else's opinion on things that you've been working on for a while.

Can I ask a question?

Sure. Go for it.

Thank you. It's about schema. I'm just wondering, aside from the obvious schema types like FAQ or event or how-to that may directly influence the snippet, do you still recommend using some of the other schema types? Maybe things like local business, or things more specific to the different types of businesses, or even web page. I think you had a tweet making a joke about that: what page would you not consider a web page? Would you use those schemas as well? Is there any benefit to it, aside from SEO maybe? I don't know.

So the thing I would primarily watch out for with structured data is: if there is a specific kind of visible treatment for that kind of structured data, then that's the direct one-to-one relationship that you're aiming for.
Everything else, apart from that direct and visible treatment, is something where it kind of depends on the pages; the structured data helps us understand the content on the page a little bit better. But it's not something where you'll see any visible effect from it directly. So if you have different cities and you mark them up as cities and locations, then that helps us to better understand: oh, this is a mention of a city, maybe not a brand, for example. But there wouldn't be a direct ranking relationship there.

Right, but you may look at those structured data types to help understand the page just a little bit better?

Yeah, yeah. So that's something where usually people are limited by the time that they have available. So I tell them: focus on the visible things. And if you can do a little bit more, that's perfectly fine. It's not a bad thing. I just wouldn't go overboard. If you're marking up every other word on a page, then you're doing a lot of work that doesn't really result in anything useful for search engines.

OK, wonderful. Thank you.

All right, maybe we can open it up for more questions from any of you all.

Can I come back? OK. So it turns out that the attribute you mentioned was already in there from the Yoast plugin, as we use WordPress. But in the parallel tests, the non-AMP articles are never, ever shown with the large image. Is there anything else we can do to help there?

I don't know, it's hard to say. I mean, if you can send me some examples, or maybe copy them into the chat here, I can take a look with the team. I believe it's also the case that we just don't show large images all the time; maybe we show them as a small thumbnail if we think that's good enough.
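For context, a large image is typically declared directly in the article's structured data, which is the markup discussed next. A hedged, illustrative sketch in Python, serialized for a JSON-LD script tag; the headline, URL, and date are all made up:

```python
import json

# Illustrative Article markup; headline, image URL, and date are made up.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "image": ["https://example.com/photos/wide-1200x675.jpg"],
    "datePublished": "2020-04-01",
}

# This string would go inside a <script type="application/ld+json"> tag.
json_ld = json.dumps(article, indent=2)
```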
But if you think you're doing everything right and you're pretty sure that we're never showing the large image, then I'm happy to double-check with the team to see if there's something that either we need to document differently, or that you specifically are doing in a slightly weird way, and I can get back to you on it.

Could lazy loading have that effect?

It could, theoretically. I mean, it depends on how you mark up the images on the page. I imagine you're using the article markup for those pages, the image is in there, and it's probably linked directly in the markup itself, probably with the large image linked there. Then we would probably just pick it up from that. However, if you didn't have the article markup and we had to find the image on the page, then lazy loading is something where we sometimes see problems. But it sounds like that's not the case for you.

Yeah, OK. I will tweet a couple of URLs to you.

OK, cool, sounds good. All right, more questions from any of you all? What else did we miss?

I could ask one more, John. On our website, our navigation, which is site-wide, has about 10 links. Two of them lead off to an external partner site, and they are nofollow links. Is this likely to affect our ranking in any way, whether those two links are there or not?

No, I don't think that would change anything. So usually we would recommend using nofollow for things like ads, or for links placed in user-generated content on a site. But that's completely normal. Lots of websites have ads, so they tend to have these nofollow links. Or, if you're using the newer attributes, that would be sponsored or UGC, for example. But that's completely normal. It wouldn't negatively or positively affect your website whether you had those there or not.

Even if it's in the site navigation?

Even there, no. Sometimes people buy site-wide ads, and using nofollow is a proper approach for that.
Or using rel="sponsored" is another way to do that. And it's perfectly fine to have these kinds of site-wide ads.

That's good. Thank you.

All right. More questions from any of you all? No questions? OK, go for it, Pedro. Go ahead.

I'll ask one. I have a question regarding reconsideration requests: how is the flow of reconsideration requests holding up during this pandemic period? Is it operating at the normal intervals, with the regular two-to-three-week response time? Or is it longer? How is this affecting everything?

I don't know offhand. But my guess would be that it's slower than usual, just because of the way things are set up: when we have vendor teams that review these kinds of things, then during situations like this they probably have to get reorganized first, work from home, and get all of their setups working. My guess is it'll be a bit slower than usual, but I don't have any firsthand data on that.

Yeah, thanks.

Be patient, Pedro. Your spammy sites will come back.

It's not me.

Sure, sure. That's fine. Your friend's, then.

I had just a question about videos.

OK.

Is there any advantage or disadvantage to using a local player versus embedding from YouTube, from an SEO perspective?

I think there are mainly two things. On the one hand, we have to be able to recognize that the video is there. If you want the video to be indexed in video search, with the video snippet, we have to recognize that the video is there. With a common player format, whether that's YouTube or Vimeo or whatever the common formats are, that's really easy for us to do. So if you use your own player format, something you custom-made for yourself, then we might have a little bit of trouble understanding it. If you're using, I don't know, open source scripts or whatever is out there, then probably we'll be able to pick that up.
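One documented way to make a custom player's video explicit to crawlers, beyond what's said above, is VideoObject structured data. A minimal hedged sketch; every value here is made up for illustration:

```python
import json

# Illustrative VideoObject markup; all values are invented.
video = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Example product demo",
    "description": "A short, made-up description of the clip.",
    "thumbnailUrl": "https://example.com/thumbs/demo.jpg",
    "contentUrl": "https://example.com/media/demo.mp4",
    "uploadDate": "2020-04-01",
}

# Serialized for embedding in a <script type="application/ld+json"> tag.
snippet = json.dumps(video)
```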
The other thing is, especially if you're using YouTube, the landing page of the YouTube video might rank in search as well. And when you're specifically looking at the videos themselves, it might happen that the YouTube landing page ranks above your content, or below your content; it's really hard to say. So that might be something where you say: it doesn't matter for us, I don't care which of these pages is ranking as long as people find the video. It might also be that you say: well, my web page is the one that I really need to have ranking, and I'd rather rank a little bit lower than have a YouTube landing page above mine. And that's a call you can make.

Got it. Cool. Thanks.

John, I have a couple of questions, actually just one question. Regarding the Google Search Console API: I'm guessing that, given the current situation, there's probably not a lot of work going into adding new features to it. I'm just wondering if you've heard anything about adding the fresh data to it, or any other developments.

We discussed the fresh data a while ago. I don't know what the status is with the team. But in particular with the fresh data, the issue we noticed is that we can't just start including it by default, because that would confuse the way people are currently using the data from the API. So we'd have to add some kind of a flag or something like that. I don't know what the status of that is at the moment. I believe it was not that far away recently, but I don't know how things are different now.

OK. And also on the API front, is there any chance there would be a Google Trends API in the near future? Especially right now, Google Trends is a very valuable resource for seeing how things are moving along day to day.

I don't know. I thought we had an API for Google Trends.

Well, it's mostly a hack, a workaround.
There's the export option, so people just use the export URL and pull the data keyword by keyword, and stuff like that. But there's no official API.

OK. I don't know. Sounds like something we should ask for. Yeah. I think, in general, with these kinds of things, if you can give us some really good reasons why it makes sense, and post them on Twitter or anywhere where it's visible, then that's always useful for going to the team and saying: look, all these people want this, and here's why they want it. You should do it.

OK, awesome. Cool. Pedro will help me out, definitely.

All right. OK, so I guess we're kind of over time, so we can take a break here. I have the next Hangouts lined up for Friday in English and Thursday in German. I think Martin is doing a JavaScript Hangout this week as well, maybe tomorrow or something?

Next Wednesday.

Next Wednesday, OK. You'll have to hold out until next Wednesday if you care about JavaScript. And I think we just uploaded a whole bunch of YouTube videos from the Webmaster Conference product summit that we did in Mountain View, so that's another cool set of stuff to look at. In particular, I guess the more advanced folks will be keen on listening to Paul Haahr's talk, which I thought was really, really cool to see.

So stay safe, stay healthy, and hopefully see you all in one of the next Hangouts again. Bye, everyone.

Bye, guys. Bye-bye. Bye.