All right, welcome everyone to today's Google Webmaster Central Office Hours Hangout on Air. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland, and part of what we do are these Office Hours Hangouts with webmasters and publishers like you all here. As always, if any of you would like to get started with the first question, feel free to jump on in.

John, can I ask a question, please? Sure, go for it. This is regarding your previous conversation about content being loaded when the page is loaded; it's tabbed content I'm talking about. You mentioned that if the content is loaded only after a click and it stays on the same URL, then you won't be able to use that. So I just wanted to continue the conversation about hash URLs. My first question is regarding displaying tabbed content: does Google consider tabbed content served via URLs with a hash as irrelevant, and not consider it for indexing, even though the content is loaded on the initial page view? And please note, we are not using Angular; we are just using the hash as anchors for the tabs. We don't index URLs with a hash separately. So if the content that you want to have indexed is loaded when you load the URL without the hash, then that's OK; we just don't index the URLs with the hash separately. But if the content is only loaded when the hash is present, then that would not work. Yeah, so what we are trying to do is load the content initially, on the page load itself. But when the user clicks on tab 2, we have hash tab 2, and when the user clicks on tab 3, it's hash tab 3. So that would still be indexed, because it's loaded on page load. That's right. You also mentioned (I was actually watching one of those old seminars you did regarding Angular) that for tab content to actually be visible in search, it's important that we use static-looking URLs. So do you think, if we go down the route of HTML5 pushState, is that still applicable? What do you mean? I missed the first part; it's important to use what kind of URLs? Static-looking URLs. So instead of a hash, if I use HTML5 pushState, then when the user clicks on tab 2, it's actually slash tab 2, and then the content is loaded. Is that still considered a good best practice? That's fine, as long as there is the static link fallback there. So if you have JavaScript that catches that click and you use pushState to change the URL, as long as there is also the A element with a link to that URL, then that will work fine. Awesome, thank you. And how long, in seconds, does Googlebot wait for the JavaScript to execute before considering a page fully loaded and taking the snapshot of the rendered page? And when does it start counting? Basically, is there a fixed timeout, something like 2,000 milliseconds? We don't have a fixed time there, particularly because of the way that we cache content. Since we don't need to fetch everything all the time, the time that it takes to render a page on our side is very different from the time that it would take in a browser. So it's not that there is any kind of fixed time that you can rely on there. OK. Thank you. Thank you so much. Really appreciate it. Sure.
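To illustrate the setup being described: plain A elements as the crawlable, static-link fallback, with JavaScript intercepting the click and switching the URL via pushState. This is only a minimal sketch; the /products/ URLs are hypothetical, and showTab is a made-up helper that reveals the selected tab's content.

```html
<!-- Crawlable fallback: real links that Googlebot can discover and follow. -->
<nav class="tabs">
  <a href="/products/overview">Overview</a>
  <a href="/products/specs">Specs</a>
  <a href="/products/reviews">Reviews</a>
</nav>

<script>
  // Progressive enhancement: intercept the click and swap the URL with
  // pushState instead of doing a full page load. Without JavaScript, the
  // plain href still works, which is the static fallback described above.
  document.querySelectorAll('.tabs a').forEach(function (link) {
    link.addEventListener('click', function (event) {
      event.preventDefault();
      history.pushState({}, '', link.getAttribute('href'));
      showTab(link.getAttribute('href')); // hypothetical helper
    });
  });
</script>
```

Since the tab content is already in the HTML on the initial page load, each static-looking URL can be indexed with that content present.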
Hey, John. Just wanted to see if you had any guidance on what to look for with an issue that popped up. We had a topic that a bunch of our visitors have been asking for information about, so last week we created a page centered around answering those questions. On Friday, when we finished it, we launched it and submitted it through Search Console. Almost immediately, for a search for the long-tail term on that topic, we showed up number 12 in the rankings, which was good; OK, everything's great. And then Saturday comes, and the page basically disappeared from the index, to where we can't even find it anymore. And another page on our site, our index page, which isn't really related to that topic at all, comes up a couple hundred positions down in the rankings. We're trying to figure out what kind of problem this could indicate, or where we should even look, given that the page was purely informational, nothing that would violate any kind of policy. So I just wanted to get your thoughts on that. That can be completely normal. The tricky part is that when we find new content, on a new site or an existing site, we kind of have to estimate where we think we should show it, where it's relevant. Sometimes we estimate fairly high, and over time that settles down. So it could be that it settles down in a similar position; it could be that it fluctuates for a while first. Maybe it'll settle down ranking higher, maybe a bit lower. Especially with completely new content, I would expect the rankings you see to fluctuate quite a bit, for maybe (I'm just making up a number) a week or two, until things settle into a state where we'd say this is the normal ranking that we think is appropriate. So basically, completely disappearing would not be out of the ordinary for the short term? That can happen, yeah. I mean, it's kind of extreme, I guess. But it can definitely happen that we index something, rank it fairly high, it disappears completely for a couple of days, and then it pops back in, maybe at the same position, maybe a slightly different position. OK, thank you.

Hi, John, I have a question. All right, go for it. So, thanks for setting this up. I guess what's going out right now with the mobile-first indexing is that second batch you kind of referenced at Brighton SEO? Maybe, yeah. It seems like a lot of the people who said they got these notifications from Google Search Console are saying that the first websites moved over are either not their primary websites, or are desktop-only. It's not what they expected to see in terms of the first sites being moved. I guess my understanding is that maybe desktop-only websites are being moved over first because there's no alternative version: there's no problem with them having two different versions, no need to make sure the links are equal between the two different sites, and so forth. Is that all correct? Well, desktop-only is essentially a unique form of responsive design, right? It's the same design on mobile and on desktop. So I could certainly see our algorithms saying, well, nothing breaks when we switch this site over to mobile-first indexing, so we might as well do it. It's not that we're specifically looking for sites that aren't mobile-friendly. It's really just that this is probably one of the safe bets to start with, and that's what we picked to start with.
And I think one of the other tricky aspects is that we're sending these messages out more or less in batches, partly to double-check that the messages work, in the sense that people understand them and that we don't see a lot of confusion around them. So maybe the first batch of messages happens to hit sites like that, and the next batch hits a broader group of sites. I'd expect some, I don't know, let's see, how can I say it, some non-patterns to be there in the first batch of messages. Because it's such a small group of messages, the sites that you hear about are probably just all over the place. So I shouldn't read too much into the patterns, or lack of them, that I'm seeing from the people reporting this? Yeah, no, it's not that we're looking for something that's desktop-only. Essentially these are responsive sites, responsive in the sense that the mobile view is the same as the desktop view: you have to pinch and zoom and scroll around to get all of the content, but it's the same content, and it works on desktop and on mobile, more or less. The other thing to keep in mind is that mobile-first indexing is not mobile-friendly indexing. It's not related to whether or not a site is mobile-friendly. It's really just the indexing part that's switching over to mobile; the friendliness aspect is still a ranking factor in the mobile search results, but it's independent of the indexing. Thank you.

Hey, John, can I ask a real quick follow-up on what we were discussing a second ago? I just want to know: is there any possibility that, because that topic has about four sentences and an H2 on our index page, the algorithm might see the index page as the page to display for that result, and is therefore ignoring the new page? Not necessarily. From my point of view, the content itself is more secondary there. It's mostly a technical matter of understanding how we should index this and where it would be relevant. Whether you have a lot of images on there or not is almost secondary. Our algorithms are unsure at first how we should be showing this; we pick some place to start with, see how that goes, and it settles down over time. OK, thanks.

John, I have a question. This is extremely embarrassing for me, but I'm going to have to air it. So, about the algorithm: around the 13th, 14th, 15th of April, we were doing our standard 200,000 to 300,000 uniques a day. Today, I'm lucky if I get 60,000. Nothing has changed on the site. We're the only people who fact-check entertainment reporting. We're the only people who have the ClaimReview markup, so our articles say "fact check" in the results. We have, in fact, improved the site while working with the fact-check support team. Everything is completely transparent on our site; while other sites hide behind unnamed sources, we list our sources. So arguably, our site has gotten better. And our ranking, which used to be outstanding, is horrible; you have a hard time finding our stories, particularly on mobile. We went from building this very sustainable (I wouldn't say big, but sustainable) company to, frankly, and I'm saying this publicly, being on the verge of shutting down, because it's not possible to do this anymore. And entertainment reporting will become the Wild West. The people making up the stories are going to win.
There's nothing that we can do, even though we've done everything we can to improve the site over the years. I don't have anything specific with regards to those dates and algorithm changes. But I can double-check with the team here to see if there's anything specific that we can let you know about, to make it easier to figure out if there's something on your side that needs to change, something our algorithms are looking at differently, or perhaps something on our side that we can improve. I think I still have your site in memory: Gossip Cop, right? Right. Thank you. So I'll double-check with the team on that. I really, really appreciate that. Thank you.

Hi, John, I have a question. All right, go for it. It's just a short question regarding meta descriptions. My understanding was that the best practice in previous years was to limit meta descriptions to 155 characters. But I was reading some articles saying that at the start of this year, a lot of the snippets were up to 300-plus characters in length. So I was curious what the best practice is right now regarding meta description length. We don't have any public number where we would say you need to focus on this length of meta description. This is something that can change over time, and that will probably differ across devices, like mobile and desktop. So what I would recommend is looking at the search results you're getting for your top queries in Search Analytics, and seeing if you're happy with the snippets showing there. If not, then I would update those. It's not that we're looking at a specific length. Sometimes we'll show a bit more, sometimes a little bit less. But it's really about trying to match the snippet to the query, to help the user understand what the page is about. Awesome, thanks, John.
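For reference, the tag in question is the standard meta description; the wording below is a made-up example, and the comment reflects the advice above: there is no official length to target.

```html
<!-- No fixed length to aim for: Google sometimes shows more, sometimes
     less. Write for the queries you actually see in Search Analytics,
     and update the text if the snippets for your top queries look poor. -->
<meta name="description"
      content="Compare our trail running shoes by weight, drop, and price. Free 30-day returns on every order.">
```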
Can I jump in with my question? Sure, go for it. So it's a little more complex, I think, than the other questions; it's regarding pagination. I'm working on a site right now with a lot of data, very data-heavy, and you can think of it as structured. The best way I like to think of it is in terms of, say, a football team: you have the team as an entity, and then you have the players as, let's say, the sub-entities. The way it works is you'll have a page for the team with all the sub-entities listed under it. These listings can run from only a few pages, so three, four, or five, or, because I have a high degree of variability in the data, an entity can at times have hundreds of pages. So I'm using simple pagination, in the sense of next and previous buttons. But in the case where I have many pages, that's insufficient, because it would take the user many clicks to get where he wants to go. So I've also added alphabetical links: A, B, C, D, E, F. The user starts at A, and if he wants the player whose name starts with B, he clicks B and jumps to B in the pages. Also, the pages have no page numbers or any sort of static component; they're completely dynamic, in that you land on a page and it displays about 40 sub-entities. So my question is, for the pagination I'm using rel=next and rel=prev. And is there a problem with that in terms of having very many pages, into the hundreds? And then secondly, the alphabetical links also link to the same content, but in a slightly different sequence. The ordering is essentially the same, but the sequence can differ: you may see the 20th entry as the first entry on a page, whereas if you had gone from the first page and clicked next all the way through, that entry would have been the 20th on a given page. Does that make sense? Yeah, I don't see a problem with that. That should just work. The one thing I would watch out for is what you're displaying on those pages. Is this content that exists uniquely on those paginated pages, or does it link to detail information on other pages? How is that set up at the moment? What we found is that in the past there was a unique page for each sub-entity, but there wasn't very much content on those pages. So we concentrated everything onto the list pages: you have a listing of all the sub-entities, with all the information there, and that data is only on those pages. But the sequence, in other words the order in which those entities appear for a given URL, can differ because of the pagination link structure I described. OK, and does the content on the individual pages change over time? Could it happen that some item is added at the beginning and suddenly everything shifts down? Yes. It doesn't happen frequently; it's not updated on a daily basis. But over time, over months or years, we add or remove data from the pages. That's why we don't want a numbered page structure: you'd have to renumber all the pages in the database, and it becomes a nightmare in terms of maintenance. Now, I think that part will probably be the trickier one. The pagination itself, I don't see any problem with, and the jump links with the alphabet, to jump to a different part within the paginated set, I think are perfectly fine; that totally makes sense from a usability point of view. The one thing I worry about a little is content that might be, say, on page five, and suddenly next week is on page seven or page three. For our systems, it's really hard to understand which of those pages is relevant for that specific piece of content. If someone is searching for something that's currently on page five, and next time it's on page seven, do we show page five? Do we figure out that it moved in time? How should we work that out? So that, I think, would be a bit tricky. One thing that might make sense in a case like that is to have some anchors along the list of URLs that don't change. Alphabetical anchors might be an option, because the name, or whatever it starts with, probably doesn't change that frequently. So anchor it by alphabet, or by some other attribute that you have in there. For example, on blogs, archive pages are frequently anchored by month. Something that makes it easier for us to understand that this page is really about this topic, and stays about this topic; it shouldn't shift around so that the page is suddenly about something else
and a different page becomes relevant for that topic, but rather that we really have something stable to refer to for that specific topic. That probably makes it trickier to implement, but I think having a more stable URL structure ultimately helps a lot when it comes to search. OK, just a question. In my mind, logically, the simplest thing for Google to understand, and I guess for users, would be a single page for each entity; the list pages would just link to those and become secondary. But in the past, we've had the impression that going that route was not as beneficial as the other way. So logically, for us, it would make more sense to go the single-entity way, but it doesn't seem to be the best in terms of search. How do you square that? I think that's something you have to look at for your site specifically. Sometimes it makes sense to group things together; sometimes it makes sense to split them out into single entities. It depends on the amount of information you have, what kind of content it is, and how it changes over time; it's really hard to say. But if you've found, from a usability point of view, that individual pages don't make that much sense, then I would find some other kind of grouping that gives you a stable URL for this content. All right, thank you. Sure.
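For illustration, a sketch of the stable, alphabetically anchored pagination being described, as it might look on one list page. The /team/players/ URL scheme is hypothetical; the point is that each URL is anchored by a letter that stays meaningful as entries are added or removed, rather than by a page number that shifts.

```html
<!-- On the hypothetical page https://example.com/team/players/b -->
<link rel="prev" href="https://example.com/team/players/a">
<link rel="next" href="https://example.com/team/players/c">

<!-- Jump links so users and crawlers can reach any bucket directly,
     without paging through the whole set. -->
<nav class="alpha-jump">
  <a href="/team/players/a">A</a>
  <a href="/team/players/b">B</a>
  <a href="/team/players/c">C</a>
  <!-- and so on through the alphabet -->
</nav>
```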
Hi, John. This is Pat. Hi. I have a question specifically about one of the Google webmaster guidelines concerning the mobile-first index. The guideline is: make sure your site's important content is visible by default. I always understood this as, if there's a page about dogs and cats, but the dog content is visible and the cat content is behind a tab, Google would consider that page to be about dogs. Now, in mobile-first indexing, the navigational anchor text is hidden behind a hamburger icon in most scenarios. What I was wondering is, how is Google dealing with the fact that navigation and navigational anchor text is hidden behind other content? So, specifically for mobile-first indexing, we do take into account content that isn't visible by default. In that scenario where you have dog and cat information, and one is behind a tab and only visible when you click on that tab, we would still count that page as being about dogs and cats, even if it's not all visible by default. The same applies to all of these menus you have on mobile, behind a dropdown or behind the hamburger menu. Those are items we take into account as normal content, because we realize that on mobile you don't have much flexibility to put so much stuff on every page in a visible way. OK, thank you. And so, specifically: let's say a site had only three navigational items, dogs, cats, and turtles, as its main navigation. You're saying it would be no better to have those three words visible, since they're small words and they'd fit on mobile? There's no difference between the anchor text being visible on mobile and it being hidden behind a hamburger menu? That's correct, yes. Perfect, that was what I was looking to clarify. Thank you. Fantastic. Cool.

OK, let me run through some of the questions that were submitted. Some of these managed to get quite a number of votes, so I guess that's good. Massimilio has a question around news articles: he published something about the Avengers under domain.com slash movies, along with other articles, and now he's moving that to a separate URL. Am I going to lose any historical value from the old articles? Should I 301 redirect the URLs? Will Google get confused having historical news under movies and more recent news under Avengers? These kinds of changes do happen over time, and we do try to figure them out. For the most part, I think it makes sense to keep the same URLs as much as possible, rather than to keep moving to new URLs. But if you do have to make these kinds of changes, and you can 301 redirect from the old URL to the new one, then essentially we can forward all of the signals from the old URL to the new one. The one situation people sometimes run into, especially around yearly or quarterly content, is having separate URLs for, say, an event that takes place every year, some conference, where you add dash 2018, and that's where all of the conference information lives. Next year, you have that conference dash 2019 with all of that information. What happens then is that we have a lot of signals about the older conference URLs and not much information about the newer ones. So we'll probably start ranking the older conference pages instead of the newer ones. So if you're writing about a movie series, going back to this example, and you have different versions of that movie but one primary landing page for the series, I would try to keep that primary landing page on the same URL, and move the older content onto an archive URL path, where you have slash 2017 for last year's version, or something like that. Essentially, shifting from one URL to another is fine; if you're doing it constantly, say on a yearly basis, I would think about finding a more stable path for the persistent content you have on that topic.

We do a lot of A/B testing on our website to maximize conversion rates. We use a solution similar to Optimizely: we have JavaScript on the page that changes the UX for visitors. The code triggers only for real visitors and doesn't change anything for bots. Could this cause problems with SEO and be considered cloaking? What's the best way to do SEO-friendly A/B testing on landing pages? In general, if you're just shifting content around and changing text, that's less of a problem. The one thing we do ask is that you don't treat Googlebot any differently than a normal user. So if you have different versions of your content, I would make sure that Googlebot also runs into one of those versions. It's no problem to give a user one specific version and keep them in that version persistently; that's perfectly fine. But Googlebot shouldn't be excluded. You shouldn't have something in your rules saying, if the user is Googlebot, then always show this version, and everyone else gets a different version. It should be part of the normal A/B test. And because Googlebot would be part of the normal A/B test, we recommend limiting the A/B test to a reasonable time, so that you don't have a constant state of things shifting around for Google, but rather something we can index persistently and a bit more comprehensively.
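A minimal sketch of bucketing along those lines. The cookie name, the 50/50 split, and the variant-b styling hook are all made up; the relevant property is that assignment is random for every visitor and there is no user-agent check anywhere, so Googlebot gets bucketed like any other user.

```js
// Assign each new visitor to a variant at random and remember it in a
// cookie, so their experience stays consistent across the visit.
// Crucially, there is no "if this is Googlebot" branch.
function getVariant() {
  var match = document.cookie.match(/(?:^|; )abVariant=([AB])/);
  if (match) return match[1];
  var variant = Math.random() < 0.5 ? 'A' : 'B';
  // Keep the test time-limited, e.g. expire the cookie after 30 days.
  document.cookie = 'abVariant=' + variant + '; path=/; max-age=' + 60 * 60 * 24 * 30;
  return variant;
}

if (getVariant() === 'B') {
  document.body.classList.add('variant-b'); // CSS restyles the page for variant B
}
```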
Hello, John. Yes, I have a question related to that as well. I've been noticing with clients, more and more, that they're trying to serve content that changes depending on what city a user is in, rather than the country. And I'd really like to make that distinction: not country. Country distinctions are easy. But when it comes to Portland, Oregon, or San Francisco, or Los Angeles, more and more people are trying to customize their pages towards these cities. Is there any clear-cut Google documentation or recommendation for people doing that? Or is it just a dangerous thing to do? I wouldn't say it's a dangerous thing to do, but Googlebot usually crawls from the US, from California. So if you have content that's visible to users in California, for example, then Googlebot would probably see that content and index it; and if you serve different content for other locations, Googlebot wouldn't see that different content. So my recommendation is generally to find a way to balance the general content on a page with the more personalized, city-targeted content, so that Googlebot is able to index some general content for that page as well. If you have, for example, events, then instead of having your home page indexed only with events for California, you have general events content that we can index, plus the California content, where, of course, a user from a different city sees the general content and their own local content. That general content stays the same, which makes it a little easier for us to understand what the page is really about: not purely events in California, but events in general, with a bunch of California stuff there too. And usually you also link off to city-specific content from there, with something like "more events in San Francisco," "more events in Portland." Because of those links to separate pages that are unique to those locations, we can index those location pages separately and say, oh, you're looking for events in Portland, here's our events-in-Portland page. Right. And to continue this, and I apologize, let me tell you what I recommended, and I just want to see if it squares with you. I said, if you're changing a page so much that the actual intent of the page changes, the better path is to have a city page for that, rather than trying to personalize the home page or something. Good advice? I think that makes sense. If you're really changing the whole thing, then definitely I would put the personalized content into a separate place. But doing a mix is also fine, where you say, well, I have this big chunk that stays the same, about my business, what I do, the type of things we have on our website, and there's a personalized chunk as well that changes depending on the user's location. That's also fine. It's just something to keep in mind that we do crawl from California; so if you search for your home page, it might be that we show a lot of California stuff from your home page in the search results. You have to balance that by having unique content for California on a separate page, where we really know this is about California, so that we don't send all users to the home page when they're looking for something in California. That is it. Thank you. Fantastic.
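A sketch of that balance: a stable general block that Googlebot (crawling from California) can always index, a personalized block filled in client-side, and plain links to the city pages. All URLs and ids are hypothetical, and loadLocalEvents is a made-up helper.

```html
<!-- General content that is the same for every visitor, so the page is
     never indexed as being only about one city. -->
<section id="events-general">
  <h2>Upcoming events</h2>
  <p>Browse our full events calendar, updated weekly.</p>
</section>

<!-- Personalized block, populated by JavaScript from the visitor's
     location; Googlebot will typically see the California version. -->
<section id="events-local"></section>
<script>loadLocalEvents('events-local'); // hypothetical helper</script>

<!-- Static links to city pages, so each city can be indexed separately. -->
<a href="/events/san-francisco">More events in San Francisco</a>
<a href="/events/portland">More events in Portland</a>
```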
All right, so another question here. I work on a website where around 80% of the product range would be considered non-family-safe; the other 20% is considered safe. But anywhere on the site, you're one click away from the non-family-safe content. This is a brand that you would expect to see for safe terms, but they're nowhere to be seen. My theory is that this is due to the overall non-family-safe nature of the website. How does that sound? Yeah, I could imagine something like that happening, where maybe our SafeSearch algorithms are saying, well, overall this website is non-family-safe, so we need to be a bit more cautious about how we show it in the search results. The general recommendation in situations like this is to separate the family-safe content from the non-family-safe content as clearly as possible, ideally on a URL basis, where we can say this URL pattern is unique to family-safe content and that URL pattern is unique to non-family-safe content. So maybe a subdomain, or at least a subdirectory, so that we can figure out that this part is fine to show in search results for general users, and this part is trickier. And if you're linking between those two sections quite a bit, that also makes it harder for our algorithms to figure out how we should be treating the content, so that's one thing I'd watch out for. I can certainly imagine that with this 80/20 mix, our algorithms have a hard time figuring out where we could show the site in a family-safe way.
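In other words, a split along these lines, where each kind of content has its own unambiguous URL pattern (the hostnames and paths here are hypothetical):

```
https://example.com/shop/...     family-safe catalog only
https://adult.example.com/...    non-family-safe catalog, on its own
                                 subdomain (or at least its own directory),
                                 with minimal cross-linking between the two
```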
Next question: once page speed is incorporated into mobile rankings, which page speed metric will be used? We use a variety of metrics; I don't think there's one number that covers everything. We use everything from real-world metrics, similar to the Chrome User Experience Report, to calculated metrics from various other tools, to try to figure out where we should place a site with regards to speed. So my recommendation would be to look at different tools, think about where you have low-hanging fruit on your web pages, and improve things in a significant way. Because depending on how you set up your website, how you create the content, and how your server is hosted, some of these metrics might look really good and some might look really bad; and a normal user going to these pages might see generally high speed or generally lower speed despite that broad mix of individual metrics. So, unfortunately, no specific metric to call out there. The question might be more about this: if somebody is accessing the desktop version from Google.com in their desktop browser, are you looking at the desktop speed, versus, for somebody on Google mobile search, are you looking at the mobile page speed? The mobile speed change that we're rolling out, I think in June, applies specifically to the mobile search results, so it wouldn't apply to the desktop search results at all. So there's the desktop version on its own, and the mobile version on its own, based on this? I don't quite follow. What? Again: let's say it makes no difference whether you're in the mobile-first index or not. If you have a desktop-only website and Google is going to show that desktop site, the mobile page speed is obviously not relevant; it's the desktop page speed. But say you have a responsive site, and it's faster on desktop than on mobile because there are, I don't know, fewer images on one than the other. Google is going to use the page speed of which version, depending on whether it's accessed on Google mobile versus Google desktop? Yeah, yeah. It's specific to the mobile search results or the desktop search results, respectively. All right, thank you.

All right, let's see some more questions. We're having an issue with product URLs not being indexed after a redesign of our AngularJS site. The URLs are search-friendly, and when I fetch and render, they render properly, with unique content, canonical tags, titles, and other meta information. The body content is unique for each state and URL and corresponds to each product page. We've requested multiple crawls and indexing, and the index coverage report says submitted but not selected as canonical; the classification states that the page is a duplicate, and there is no canonical listed. So much information. I probably need to take a look at those specific URLs. It looks like they're listed in the question, so I'll copy those out and double-check what I can find. It sounds like, generally, you're doing the right thing, in the sense that you're testing whether these pages are actually being picked up and whether they can actually be rendered. And it sounds like we're not quite clear on how we should be handling those pages, so I'll double-check to see what might be happening there.

What is more decisive for Google to understand the site architecture: the URL path, the number of clicks from the home page, breadcrumbs? Do you think it's a good idea to remove subfolders from product URLs to shorten the site depth? For the most part, to understand the site architecture, we essentially just follow the links on the pages, so the URL itself is less of an issue. Whether you use query parameters or put everything into a path, all of that is essentially similar to us. Query parameters let you use the URL parameter handling tool, which might be useful on a really big e-commerce site. But in general, you can use subfolders or subdirectories, a long path or a short path; that's totally up to you. It's more a matter of understanding the architecture of the site by following its links, to see that this URL is related to this one, that this one is a category page, and that there are a bunch of detail pages below it. All of that helps us understand the pages a little better. Breadcrumbs fall naturally into that pattern, in that you have a link to the higher-level categories, which helps us understand the context of pages. But essentially, it's not a matter of minimizing the length of a URL to make it easier for us to understand.

Hi, John. Hi. I have a question about site quality as well. We commenced a domain move on the fourth of this month, from a .com to a .org. And by the 11th of the month, the new site had a manual penalty, a thin-content manual penalty. All we changed was that we hid a button via a CSS rule and we made the site faster. Also, we changed the offers on all of the content we had. What could be the cause? What was going on here?
So I suspect the move is unrelated to the manual action. A manual action is something the web spam team applies manually, and it's not that they have a list of sites that are moving and then go off and look at those sites. They're essentially going through, trying to find sites that are problematic, and taking action manually when they look at them. So especially with something like thin content, it's usually not a matter of a technical issue, of having done a move, or of a specific URL structure or a specific CSS setup. It's really a matter of the actual content on those pages. Our domain is readincentrary.com, and I believe our content is not thin at all. Yeah, even if the content hasn't changed, it might just be that nobody from the web spam team had looked at it in a while, and they noticed it now. Even if nothing has changed, that doesn't mean it was OK before. So I'd recommend really taking a step back and thinking about what is actually happening on your website: where this thin content might be coming from, what kinds of problems might be there. Maybe it's something, I don't know, related to user-generated content, that there's a lot of low-quality user-generated content being published there; it's really hard to say. So I'd really recommend taking a look at the site, maybe getting some other people to look at it as well, and getting some opinions on what you could do to improve things overall. OK, thank you. I do believe that we are one of the better sites for this type of query. But thanks for the input. Yeah, I mean, it's really hard to say. I don't know your site, and I don't work with the web spam team directly, so it's not that I could say it's definitely a low-quality site and you deserve this manual action. It's also possible that the web spam team sees this as something that's on the edge. And depending on what you have on your website, sometimes it makes sense to respond to the manual action saying, actually, I think you're misunderstanding our site; we have this and this and this, which are really high-quality parts, and maybe it's a bit confusing, maybe we can make it less confusing. So talk to those people as well. OK. We sent a reconsideration request on the 12th and followed the information in it, but we haven't heard back yet. Is this kind of timing normal? That can happen, yeah. Especially if it's something where the web spam team is not quite sure, where they need to double-check with some other analysts, it can take a while like that. OK, thank you. Sure.

All right: we saw some good gains from the recent Google algorithm updates mid-March, and then it all came undone with decreases from the latest one on the 24th of April. Can you tell us about this? I thought the recent one was meant to reward sites that were previously under-rewarded. So, I don't have any specific information about what happened on the 24th of April. We make algorithm changes all the time, and we try to improve the relevancy of the search results we show over time. So I do expect things to keep changing. And it's not necessarily about individual sites, where we'd say, well, this is a bad website, we should show it less.
Maybe it's just that we realize users expect to see slightly different content for some of these queries. So I wouldn't focus so much on the individual algorithm updates, but rather think about your website: where it's relevant, and what you can do to make sure it's seen as relevant as possible. Maybe get some input from people running similar websites, or other websites, to get their take on it as well. Because if other people are a bit confused and not really sure either, maybe that's a sign you need to take your website up to the next level.

Can I jump in? I had a similar type of experience within the same date range, except that in my case (we discussed it during the last meeting) I did a full update of my site during what seemed to be your mid-March update. Since then, if I look at user metrics like time on page and bounce rate, everything has greatly improved with the new version. So everything, as far as I can tell, has gotten better. However, in mid-March I had a big spike of traffic, and then my traffic was cut by more than half, and it hasn't recovered yet. So my question really is: how much of the drop in traffic is most likely a result of what was there previously? Can I expect things to improve, since I've improved the site? Or am I seeing the result of the improved site somehow being negatively impacted? So, especially on a bigger website, it takes quite a bit of time for the algorithms to adjust to bigger changes. I wouldn't expect to see fast changes there. It's more a matter of several months, over which we re-crawl, re-index, and re-process the website, to understand how it has changed and how we need to change the way we show it in the search results. So if you made changes one day and started seeing changes in the search results a week later, and that was also a date that other people mentioned as bringing bigger changes, then probably those aren't related to the changes you made. Usually you'd see a multi-month period of things subtly changing over time, rather than one big change. So I should just leave things as they are right now, let it settle out for a few weeks or months, see what happens, and then start trying to address potential issues? Yeah, although I'm always cautious about saying you should just leave it for a couple of months, because that sounds like, oh, I'm never going to change anything again until it figures out where it wants to show my website, and you always need to be on top of things anyway. There's always a certain amount of work you'll be doing anyway, so I shy away from saying, I'm not going to touch anything until, I don't know, June or July. OK. OK, thanks. Sure.

Let's see. How can we fix or delete an info: query result in Google Search like this, where URLs have been wrongly 301 redirected from the subdomain to another language and country? The URLs have been returning 410 for a week and are out of the sitemap file; it seems like Google has memory issues. I don't know specifically what this long URL is, so let me just double-check; it's really hard to say what you're looking at there. When I click on that link in the search results, it shows something to me.
So something seems to be working. But in general, when an info: query shows a different URL, that can be completely normal. It's essentially a sign of us saying, well, these are probably very similar, or the same content, available on multiple URLs, and showing one URL is the same as showing the other, because users get to essentially the same content. In this case, I'm not quite sure what you're asking specifically, because of the 410: when I click on that link, it shows a normal page, not an error page. So I'll copy that out and double-check what might be happening there when I get a moment.

All right, so the next question here. We're looking at how to speed up our page load times, and one way is to load our mega menu through JavaScript immediately after the page loads. My questions are: will Google detect and index the links in the mega menu? Will these links be treated as followed or nofollow links? And how can I check whether Google is seeing the links, and whether they're treated as followed or nofollow? So, if you're adding links to a page using JavaScript by inserting link elements, that is, A elements with normal href attributes pointing at individual URLs, then we treat those as normal followed links. Unless you're also adding a nofollow attribute with JavaScript, those are normal followed links. And if you're adding them immediately after the page loads, then we can probably pick them up from rendering and follow them as well. So for the most part, I don't see a problem with that. I'd double-check with the Fetch and Render tool to see that we can actually pick those up. One thing you might want to do, just to confirm it's working well, is to have a special CSS version, maybe on a test page, so that you can actually see all of the content you're adding there. The other thing you can do is use the rich results test (it's separate from Search Console), where you can look at the rendered source code: enter your URL there, look at the source code as we render it, and it should show the individual links, and whether they're followed or nofollow.
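A sketch of that kind of menu injection. The container id and the menu data are made up; the relevant part is that each entry is a real A element with an href and no nofollow attribute, inserted right after the page loads.

```js
// Build the mega menu as soon as the DOM is ready. Because each entry is
// a real <a href="..."> element, Google can treat it as a normal followed
// link once the page is rendered.
document.addEventListener('DOMContentLoaded', function () {
  var items = [                      // hypothetical menu data
    { text: 'Shoes',  url: '/shoes/' },
    { text: 'Shirts', url: '/shirts/' }
  ];
  var menu = document.getElementById('mega-menu'); // hypothetical container
  items.forEach(function (item) {
    var link = document.createElement('a');
    link.href = item.url;            // no rel="nofollow" is added
    link.textContent = item.text;
    menu.appendChild(link);
  });
});
```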
A specific question about my site: through some peculiar search behavior, we discovered that Google was featuring some of our landing pages as video results, even though we don't have video markup on the page (some examples here). We've since removed the video, shortened our meta description, and removed any other markup we had on the page. Why would Google pick up pages in this way? And now that the video is removed, is it just down to Googlebot to crawl the site and recognize that it's no longer there? What do we need to do? So in general, if a page has a video on it and we can recognize that there's a video on there, we might pick that up as a video landing page, because it's essentially a page with a video on it. So, theoretically, that can happen like this. With regards to removing it: obviously, removing the video is the first step. If the video is hosted on your site, you could also block the video file from being crawled with robots.txt. And submitting the page with the Fetch and Render tool back to indexing is also a good way to let us know about this significant change you've made. So that's essentially what I would recommend doing there. One thing to keep in mind is that video and image content generally doesn't change as quickly as normal web content, so it sometimes takes a little longer for our results in that area to update and be refreshed to the current state of the web. It's not as quick, with regards to indexing, as normal HTML content might be. I expect that to change over time, but it's something to be aware of when you're looking at issues like this.
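If the removed video was hosted on the site itself, the robots.txt disallow mentioned above might look like the following; the file path is hypothetical, and the rule simply stops the old video file from being recrawled.

```
# robots.txt
User-agent: *
Disallow: /media/old-landing-page-video.mp4
```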
Mega menus with regards to SEO: flat navigation or mega menus for PageRank? Essentially, you can use either. I would focus on usability aspects over SEO aspects here; sometimes there are clear usability wins with one type of menu over the other.

Is there a certain schema that Google Assistant uses to form its answers for voice queries, or does it just rely on the quick answers in the search results? As far as I know, there is no specific markup with regards to Google Assistant in general. I don't know if that will change over time, but at the moment, at least, we're not looking for any specific markup when it comes to these quick answers in Google Assistant.

Let's see, another question here. I am being pressured to insert slash products, or an alternate relevant catalog-based identifier, into our product URLs, which make up 95% of our large site. This is to facilitate Cloudflare speed optimization and to help with analytics reporting, plus it should allow us to improve the site's breadcrumb structure. Obviously, a large number of 301s will be added to cover this. I've read that it could mean up to a 25% visibility loss short term. Will the improved speed ultimately result in better visibility, or will amending the URLs be an overriding long-term negative? So in general, site structure changes do involve short-term fluctuations. If you're making significant changes to the URLs on a website, even if you're doing the 301 redirects and everything else right, changing internal links and updating sitemaps as well, you will definitely see some short-term effects on the visibility of these pages, until everything has settled into a more stable index state. If you need to make these URL changes, that's something you have to take into account; you kind of have to bite that bullet. In general, I wouldn't expect a lasting negative effect. You'll probably see short-term fluctuations, and in the long term this should settle into essentially the same state as before. And if you have wins from changing the URL structure, like cleaner breadcrumbs, faster pages, and better analytics, then obviously you get all of those positives on top of being in a similar state as before. So that sounds like it's worth it. The one thing I would do is make these changes at a time when you're not so reliant on search traffic, just to make the period where things fluctuate a little easier, so that you don't have to depend on search traffic while you know things will be a bit up and down.

All right. Wow. OK, I think we're almost over time. Oh, we are over time. Maybe I'll open it up to you guys again: is there anything else I can help with or let you know about? Oh my gosh, everyone is quiet. OK, that's good. It's me again, John. I've been talking to the guys in the chat about the same domain. I would appreciate it if you could maybe take a look at it at some point and let me know what's what. OK. Do you maybe also have a thread in the help forum? Yes, I do. I think that will be easier; if you can drop that link into the chat as well, then I can copy it out and take a look there later on. OK. Awesome. Thanks. Cool.

The other thing maybe worth mentioning is that next week we'll be at Google I/O. So if you're keen on hearing from us with regards to search (specifically, we have a session on JavaScript sites in search), feel free to tune in. I think the sessions are all live-streamed, and they'll definitely be on YouTube afterwards. Maybe I'll also have a chance to set up an in-person hangout again, since we'll be there where a lot of developers are, and we can do a hangout like this with people in the room as well, which is always more fun.

All right. Let me copy all of this out of the chat so that I don't miss anything important. Otherwise, thank you all for joining, and thanks for submitting so many questions. Sorry we didn't get through all of them; I'll see what we can do there. I'll also set up the next batch of hangouts, so anything else that's on your mind, you can drop in there. And in the meantime, of course, feel free to reach out to us on Twitter, or contact us in the Webmaster Help Forum, where lots of friendly and helpful people are as well. All right. Thank you, John. Thank you, John. Thanks, John. Thanks a lot. And have a great week. Bye, everyone. Bye.