OK. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller, and with us is Martin Splitt. We're Webmaster Trends Analysts at Google here in Switzerland, and part of what we do are these office-hours Hangouts. We have a slightly different setup this time, more like a meeting room, because we're expecting one more person to join, so that'll be pretty cool. But I have all of your questions here, so we can go through those. Or, if any of you want, you're welcome to jump in with the first question from your side.

Hi, guys. Hi, Ely. I asked a question about the ranking of one of the biggest domains, Amazon.com, and their product pages, which lack the content part but get high SERP positions. Can you give some information on how that really works? Is there a secret to why they get all those top positions without even having proper content on the pages?

There's not really a secret. I looked at those pages. You mentioned the query there, and I looked at that, and from my point of view there is content on there, there are reviews on there. It feels like there's stuff on there that would make sense to show in the search results. So I don't see anything specifically problematic with those pages where there would need to be some kind of secret Amazon bonus or anything like that. Sometimes really large sites do have pages that aren't really perfect, but the rest of the site is really good. So it can happen that we rank pages that individually might not be fantastic, but are otherwise on a reasonable website. That might be happening with some of the pages you're seeing, but even for the query that you mentioned, the content looks reasonable. You can buy the product there, and it has the technical details, from my point of view.

Well, that's basically the case I'm referring to. Technically, there is no technical information. There are like five sentences, and they're short, two-word sentences, and there are five of them. Basically it's like 15 words of description of the product, and everything else is just links to other PDPs. And in terms of reviews, there was just one review. That's why we got concerned, because we started working on the platform with our products, and these kinds of weird PDPs just appear from nowhere with no useful content. I currently work in this field for these specific products, and it really surprised me when I saw just 10 to 15 words that don't even look like a proper description of the product. That's why I asked this question.

Maybe I saw a different page, or maybe they serve different pages to different users. But from my point of view, looking at that query and looking at the product, it looks OK. And I think in general it's always possible to find pages that are somewhat suboptimal on larger websites. When you're competing with a really large competitor like that, it's always going to be hard. So I wouldn't focus purely on the number of words or the amount of technical information you have there. If you're competing one-to-one with a really big website that does a lot of things really well, that's going to be really hard. My usual recommendation would be to try to find unique angles that you can cover, ones they're not even interested in trying to cover.
Also, I wouldn't fall for the "oh, it's only 17 words" thing. As John said, we might be seeing a different page. But on the page that I'm seeing, if I were a user wanting to buy this camera, it tells me the video resolution, the frame rate, the interface, the measurements, the price, the shipping information. It has reviews, it has answered questions, it has all the manufacturer data. It's not the length of a product detail page that matters, it's the content. And as far as I can tell, the content on the pages I'm being served looks pretty decent.

I think what you might also want to keep in mind is that we generally crawl from the US. So if, for example, they serve an empty page to users outside of the US but serve the full content to users in the US, then we would still index that full content page. That sometimes throws things off on our side, because nobody likes to do things for Switzerland, so we always see empty pages or can't buy anything. But we would still index the US version in Google and show it worldwide, because that's where the data centers that do the crawling are. And at least in my search results it's at position seven for that query, so there are a bunch of other pages ranking higher, at least for me right now.

Something changed during the last week, honestly. It started dropping down. I don't know if it's a coincidence or something else; it was surprising to me. And you can see the top two results are our pages, basically. That's good news, but also bad news. I was really curious whether there is some small secret, or whether it's just because they have a large volume of pages and links everywhere, like a very high rank for the overall domain. OK, good to know. Thank you very much. Sure.

All right. Any other questions from you all before we jump in? Hello, John. Hello, Martin. Hi. Yeah, so I have a client, and it is a federal medical center. All content obviously goes through the doctors before it's published on the website. So do you think it is a good idea to let users know that the information was checked by doctors, and also to put relevant structured data on the pages? I mean, it's a medical center, after all, and it's logical to assume that you can trust the information. So, for example, a page which provides information about a surgery: you can read about the surgery, check prices, and make an appointment. And the content of such pages is also checked by doctors, because it contains medical facts. So do I need to put information about the particular doctor who checked the page there?

I think you don't need to do any of that. But if you're doing something good for your users, then I would highlight that and tell people what it is that you're doing, and highlight the value that is behind your content. In general, if you're spending time and money to make things better, then don't hide that. Don't be shy about it. I realize with all of the EAT things, people would especially want to do that. But just generally, from purely a user point of view, if you're doing something good and it's not directly visible on your pages, then users might not realize that you're actually doing it. OK, I get it. Thank you. Sure.
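For readers who want to act on that answer, here is a minimal sketch of how such a page could expose the reviewer in structured data, using schema.org's MedicalWebPage type with the reviewedBy and lastReviewed properties. All names, dates, and URLs below are placeholders, and John's point stands: this kind of markup is optional, not something Google requires.

```html
<!-- Hypothetical example: a surgery information page marked as reviewed by a doctor.
     Names, dates, and URLs are placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "MedicalWebPage",
  "name": "Knee replacement surgery: procedure, prices, appointments",
  "url": "https://example-clinic.com/surgery/knee-replacement",
  "lastReviewed": "2019-08-01",
  "reviewedBy": {
    "@type": "Person",
    "name": "Dr. Jane Doe",
    "url": "https://example-clinic.com/doctors/jane-doe"
  }
}
</script>
```

Showing the same reviewer information visibly on the page fits the "don't hide it from users" advice above.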
OK, let me run through some of the submitted questions so we don't lose track of them completely. Let me just refresh the list so that we get the newer ones.

"When I select the domain property in Search Console, the property has no disavow tool." Yes, that's true. Some of the tools in Search Console are only available in the older version of Search Console. We're working on moving everything over, but since some things are still in the old one, you still need to use the traditional properties for those. Usually verification is pretty easy, so that should be less of an issue.

"On the desktop version I have structured data, but not on the AMP version. Does that violate Google's policies, because the two versions are different?" It doesn't violate our policies, but we really, really want the AMP version to be equivalent to the normal version of your website. Structured data is usually less of a problem, but especially content, navigation, internal linking, all of that should be equivalent on AMP, so that when users go to your AMP page they're not served a stripped-down page that doesn't serve their needs. I know that's something the AMP team is always struggling with, because people think, well, I'm just making a really fast page, and to make a fast page I will show no information. But for users that's a terrible experience, and in the long run that's not going to be good for your site.

"When the 90 days are up after you've used the URL removal tool, does Google try to re-crawl the URL directly? If so, does it see the robots.txt file, or is it only when it spiders the site?" The URL removal tool only hides a result in the search results page; it doesn't change anything about crawling and indexing. So usually what happens is we hide it in the search results and we continue to crawl the page. And if that page is blocked from indexing, if it has a noindex, or is blocked by robots.txt, or returns a 404, then we'll drop it from our index and we won't have to crawl it as much anymore. So it's not that something happens after the 90 days; it's more that during that time we'll work to reprocess the page. I think the question also touches on the quirk with robots.txt and noindex: if you have a noindex on a page and you block it by robots.txt, then we don't see the noindex, because we can't crawl the page. So I recommend doing either one or the other. If you do both, we just don't see the noindex.

"A question about JSON-LD schema markup for my handmade, one-of-a-kind products. I don't have a global identifier, and Search Console gives me a warning for not adding one. I refuse to just make one up." There are two things here. On the one hand, this is a warning, so it's not an error that will block everything. It's basically just saying it would really help us to have an ID here, so that if there were multiple versions of this product, or multiple people selling the same product, we could potentially group them together. So that's the one thing: it's not that we wouldn't process it at all. It's not an error, it's just a warning. You don't have to fix all warnings; a lot of sites have warnings with structured data, and that's perfectly fine. On the other hand, depending on how many products you're selling, it might make sense to try to get one of these IDs so that you can use it, especially if you're selling something that other people are reselling. But ultimately that part is definitely up to you. I certainly wouldn't go out and just make them up. Apparently you just register your company and then you can start enumerating your products, so maybe it's not that much of a hassle, I don't really know. But again, it's a warning; it won't break everything.
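To illustrate the warning being discussed, here is a rough sketch of Product markup for a one-of-a-kind item that has no global identifier such as a GTIN; every value is invented. The missing identifier typically surfaces as a warning rather than an error, so the rest of the markup can still be processed.

```html
<!-- Hypothetical one-of-a-kind product: no gtin13 or mpn, which usually produces a warning, not an error. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Hand-carved walnut jewelry box",
  "description": "One-of-a-kind jewelry box, hand-carved from walnut.",
  "sku": "WB-2019-017",
  "offers": {
    "@type": "Offer",
    "price": "149.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```

And picking up the robots.txt plus noindex quirk from the same answer, a minimal sketch of the conflicting setup: the disallow rule stops Googlebot from fetching the page at all, so the noindex on the page is never seen. Use one mechanism or the other, not both. Paths are placeholders.

```
# robots.txt: blocks crawling of /private/, so the pages there are never fetched
User-agent: *
Disallow: /private/
```

```html
<!-- /private/page.html: this directive stays invisible to Googlebot while crawling is disallowed above -->
<meta name="robots" content="noindex">
```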
I wanted to quickly ask, since you touched on products and catalogs. A question about product listings: we see that one of the products we manufacture basically has the wrong name and description, just because it was launched or submitted to Google a long time ago, and all the other distributors use the same wrong name. Is there a way that we can just feed Google the correct product description, the correct GTIN, or whatever unique identifier, so that it fixes the main product information in the overall index and stops using the legacy one? Offhand, I don't know. That's something you'd probably need to check with the folks on the shopping side, and I don't know who would be best to get in touch with there. If you want, you can drop your information in the chat and I can forward it on to the shopping team. Maybe there is something they can send back to you. That would be great. Thank you. Cool.

"I'm working on geographical pages. For UX, I hired a photographer to capture relevant images of the region. I'm wondering how Google distinguishes between new photography and existing stock photography, and whether this can change search results." What would definitely happen is we would see these as separate images and index them as separate images; in Google Images, we would show them individually. They're unique images: even if it's the same scenery, even if it's a similar view to existing ones, they're new images. The lighting is different, everything is slightly different. So from that point of view we would treat them as individual things and rank them individually. From a web search point of view, we don't really care what you use with regard to the images; in web search we focus on the textual content. We don't have a differentiator where we'd say, well, this image is stock photography, therefore it's bad. We essentially just say, well, there's an image here, there's an alt attribute, we can index the image for Google Images, and we can double-check whether it's a unique image that we need to index individually. But for normal web search we wouldn't really take that into account. So from a web search point of view that would probably not change a lot; for Google Images it would change things, and for users I imagine it would change things as well.

"With EAT, how much value do external links carry? Surely proving that you're an expert on the page is not sufficient. What else matters: mentions, or external links from relevant sites?" We don't have any explicit information with regard to what you need to do there. A lot of this comes from the Google rater guidelines, which are not direct search ranking factors but rather what we give folks when they evaluate the quality of our search results. So from that point of view, it's not that you would need to gain this through links or anything like that. Rather, this is something that normal people look at when they review the quality of the search results, and which, perhaps, normal users would think about as well: can I trust this website? So it's not a matter of "you need to put these five words on your website and then get a link from this other site." That's definitely not the case. It's more a matter of how you present your website overall and how users would perceive it.
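Going back to the regional photography question for a moment, here is a small sketch of the kind of thing being described: an original photo referenced with a descriptive alt attribute so it can be indexed for Google Images. The file name, alt text, and caption are made up for illustration.

```html
<!-- Hypothetical original regional photo with a descriptive alt attribute and visible caption -->
<figure>
  <img src="/images/lavaux-vineyards-sunrise.jpg"
       alt="Terraced vineyards above Lake Geneva at sunrise, photographed for this regional guide">
  <figcaption>Original photography commissioned for this page.</figcaption>
</figure>
```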
"My website is still being crawled with desktop Googlebot. How much time will it take to change to mobile Googlebot?" We've switched a significant part of the web over to mobile-first indexing, but we haven't switched everything over. You don't need to do anything to force that switch. Our systems use a number of algorithms to determine when a website is ready overall, we do that on a per-domain basis, and when we think it's ready, we'll switch it over. There's no specific timeline where we'd say it will happen by a certain date. So from that point of view, you don't need to do anything specific.

Hi, John. This is Alex from Greece. How are you? Can you hear me? Yes. OK, I have a question regarding this crawling issue with the desktop bot. We had switched to mobile-first indexing in November of 2018, but a couple of months ago we switched back to the desktop bot. Do you think that is normal? Is this something that you have seen before? Usually that wouldn't be happening, but I don't know exactly what you're seeing, so it's really hard to say. Would you be able to add the URL in the chat? Then I can take a look at it afterwards. Yes, of course. I made a comment with the whole situation, so you might read my question later in the conversation, OK? I saw your question, but I don't think you mentioned the URL, so I don't know which site it is. I'm dropping it in the chat. Thank you. Cool. Thanks a lot.

Hi, John. Just a quick question here. This is with regard to content duplication. We usually understand that duplication is not against the Google guidelines, and I think that's a very fair point. But we were facing an issue where, if I could put it this way, a very unique product was being offered by multiple brands, and we were creating the pages for those brands. From a content perspective, the technical specifications and the way the product is structured will be the same; the only thing differentiating it is the brand name that is offering it. So how is that content treated, as duplication, or how will it be handled?

We treat duplicate content on several levels. On the one hand, if the whole page is copied, the whole HTML page, then that's easy: we can see that it's a duplicate. On the other hand, if a part of the page is copied, then we can see that that part of the page is a duplicate. And what happens when just part of the page is duplicated is that we will still index the whole page; we will index all of those versions. And if someone is searching for a specific piece of text that is only in that duplicated section, then we will try to pick one of those pages to show. From our point of view that's normal: because we can recognize that the same content is available on different sites, it doesn't make sense to show all of the different versions of the same content. Usually what happens, especially with e-commerce sites, is that there are other things involved that help us pick the right one. So if someone is searching for something local and we can recognize a store as local, then we know it's duplicated content, but this is the local version, so we'll try to show that one. Those are the kinds of things that come up there. In general, when it comes to making sites, oftentimes you don't have time to write product descriptions for everything, so it makes sense to try to add additional value through other parts of the page, maybe through reviews on the page. Maybe you have your own product photos.
Maybe you have something else that you're doing that's unique to your website, unique to those specific products. Highlighting all of that makes it a lot easier for us to say, well, this is actually a unique version, and we need to make sure we don't just filter it out because it has a small description that's the same across different sites.

OK, a question: say you wanted to change from a language-region URL to just a regional one and you redirect the respective URLs. Would you lose ranking within that region because you're no longer region-targeting and it just goes to the dot-com? If you're using geotargeting, then that would become a generic page instead of the local page, and for queries where we can tell that a user is looking for something local, that might have an effect. But if you're not using geotargeting, or if people are not explicitly looking for something local, then that wouldn't change anything. Sure, thank you. Sure.

Hey, John. Hi. Hey, Dan. Greetings from Dublin, Ireland. I have a question for you regarding sitemaps. I wanted to get a couple of pages into an XML sitemap, so I ran a crawl in Screaming Frog and sent it over to the developers to put on our server so that I could submit it in Google Search Console. But when it went up, it was actually on a subdomain and not on the domain itself, and now it's in the sitemap index. So I was just wondering: will Googlebot come in, find the sitemap index, and, even though it's on a subdomain while the URLs are for the actual main domain, will it find that XML sitemap and crawl those URLs, or does it have to be on the domain itself? If the sitemap is just submitted on its own, then the URLs have to be within the same path. On the other hand, if it's submitted through Search Console and you have the subdomain also verified in the same account, then that would work. So that's the way sitemaps work for unauthenticated submissions, I guess: if you don't do it in Search Console, then the URLs within the sitemap have to be below the path of that sitemap file. If you're doing it through Search Console, then the URLs in the sitemap file can be for any valid property within your Search Console account. OK, thank you.

All right. More questions from your side? I mean, I have more in this list, so I can continue running through those, but it's up to you. Do you have JavaScript questions or specific technical questions? Oh, you do. I figured that's what would happen. OK.

"My site is still being crawled with the desktop crawler. How much time will it take to switch to mobile Googlebot?" OK, that can take a while. There's another question in there that asks, can I switch to mobile Googlebot? There's no way to opt in or out. We're just progressively moving sites to mobile-first indexing, but there's no way to tell you, oh yeah, next week it's going to be you. Be patient. It'll happen. It'll be fine.

Hi again. Hi. I asked this a couple of minutes ago, but I think you weren't in the conversation. Regarding the switch from the Googlebot smartphone crawler to the Googlebot desktop one: we had switched to the Googlebot smartphone crawler in November 2018, but a couple of months ago we switched back to Googlebot desktop. Do you think this is normal, and why might it have happened? Yeah, I don't know. I think you dropped the link? I dropped the URL, yes. Yeah, I need to check with the team what the status is there. Did you get a notification in Search Console? No notification whatsoever. And what is more...
But for the switch to mobile, did you get one? We had a notification that we had switched to the smartphone bot, but never a notification that we had switched back. OK, I don't think we send notifications for switching back, but it's something. In talking with the team, they generally wouldn't switch sites back. From discussions with them, if they switch one site over and it turns out the website does a redesign and then doesn't work so well on mobile anymore, it's tough luck: it stays switched over. So I think maybe something weird happened in your case. What is strange is that we still have top performance in Google Search, top performance in Google News, and our AMP pages are at top performance, 99 percent valid, et cetera. But the crawling we see in our logs from the Googlebot smartphone crawler is at about 70 percent, and the Googlebot desktop crawler is at about 30 percent. That's why we are a little bit skeptical. Maybe it's normal, but why is it still showing as Googlebot desktop? I can send you the screenshots. Yeah, screenshots would be useful. But in general, when we switch a site to mobile-first indexing, we still have a split of roughly 80/20 or 70/30 between mobile and desktop, so it's not purely mobile. For some kinds of requests we just use the desktop Googlebot; for example, I think the shopping requests are done with the desktop Googlebot. For a news site that probably doesn't matter much, but depending on the website, if you look at the overall traffic, it won't be 100 percent mobile or even 90 percent mobile; there will always be 20, 30, 40 percent still with desktop. But I can definitely take a look at that with the team. OK, thank you very much. Sure.

You also, I think, asked about Chrome. What was it, the new Google Chrome Suggestions? Yeah. I have no information on how that comes together. In the Discover feed, that's something where we do use the normal crawling and indexing, and it's kind of separated out, but I don't know how the Chrome Suggestions work there or how you would rank there. And I suspect it wouldn't be a matter of a technical issue, but rather, I don't know, some fancy Chrome ranking factor, which apparently is now a thing. Yeah, cool. I will double-check on that with the team too, though. Thanks a lot. Sure.

All right, we're kind of running low on time, but if any of you want to ask more questions, we have a bunch of you here, so go ahead. I have another question. OK, go for it. You mentioned before that the desktop version should be exactly the same as the mobile one. Is that with regard to content, or markup-wise as well? Because we don't have the exact same template on desktop and mobile: we detect the user agent and use a separate desktop template and a separate mobile template to be more optimal for the user. You get much better performance if you use less markup, more optimized images, smaller images, et cetera. What is your opinion on that? That's perfectly fine. OK, thank you. The thing I would watch out for is that with mobile-first indexing, we will only use the mobile version for indexing. Like I said, we still crawl with the desktop sometimes, but we use the mobile version for indexing. So things like internal linking we would take from the mobile version. So if the mobile version doesn't have any of the navigation, then that would be a problem. If the navigation is there and the layout is different, the HTML is different, that's totally fine. Thank you.
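Since dynamic serving came up in that answer: for a setup that returns different HTML for mobile and desktop user agents from the same URL, Google's general guidance is to signal the variation with the Vary HTTP response header so crawlers know both versions exist. A rough sketch of what such an exchange might look like, with placeholder host, path, and values:

```
GET /products/camera-x100 HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X) ...

HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Vary: User-Agent

<!-- mobile template returned here; desktop user agents get the desktop template from the same URL -->
```

As John notes above, the two templates can differ in layout and markup, but with mobile-first indexing the mobile template is the one that needs to carry the full content and navigation.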
One last JavaScript question. OK, so I was just wondering, and I know it's a little bit close to my previous question, but what's the factor, what's the metric you look at to decide: OK, this website is going to go through two waves, and this one isn't? Because we see that quite a lot of JavaScript websites don't go through two waves; maybe there is not enough JavaScript, or whatever. We've actually run quite a lot of tests recently to play with that, because we're investigating it.

Do you want to answer that, or should I? I can hand-wave. I know John has answered some of the waves questions before. So, these days the two waves of indexing play less and less of a role. Generally speaking, you may see a lot of websites that are not using JavaScript that still go through basically two waves, and you might see a bunch of... Wait, wait, wait, explain that. Right. OK, so how do I put this? Here's the thing: pretty much every website, when we see it for the first time, goes through rendering. There's no indexing before it has been rendered. And there are certain heuristics where, if we see after a while that the rendered page doesn't really differ, that it looks the same as before... So what happens is we do a crawl, right? We do a crawl, which means, yeah, let's say you get a new domain. You learn how much CPU this new domain is taking. No, that's not what we do. What we do is an HTTP request. We get something back, some HTML. Maybe it's bare-bones HTML, and all it does is load some JavaScript. Then this HTML that we got from the original HTTP GET request from the crawl goes into rendering. Rendering runs the JavaScript, and boom, a lot of content appears that wasn't there before. So we're like, aha, OK, this needs to be rendered. And there is a heuristic: you look at the difference between the initial HTML and what you see after rendering, whether there is extra content.

And the interesting thing, what I want to make very, very clear, because I talked to the team and I was surprised about this: I thought it was still happening a lot more frequently that we go, oh, all right, we're going to skip rendering. It is not happening that frequently anymore. So for many, many websites, even if they do not run JavaScript, they might still go through the render phase, because it doesn't make as much of a difference; it's cheaper than the complexity we would otherwise incur. So there are very, very few cases, and the internals of that are very complicated. I still haven't fully grasped what exactly triggers the heuristic, because from what we see, there are quite a lot of JavaScript websites that never go through two waves, and there are some websites that do go through two waves, and again, we don't really see a difference. So one of the factors for you is the difference between the crawled DOM and the rendered DOM. And I wouldn't say that the two waves of indexing are dead, but they definitely play less of a role than they used to. I expect that eventually rendering, crawling, and indexing will come closer together. We're not there yet, but I know the teams are looking into it. No plans, no deadlines, no roadmaps to be announced yet.
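To make the crawl-versus-render difference concrete, here is a bare-bones page of the kind Martin describes: the HTML fetched by the crawler carries essentially no content, and the content only shows up in the DOM after the script runs during rendering. The page and its text are invented for illustration.

```html
<!-- What the crawler fetches: almost empty HTML whose content only exists after JavaScript runs -->
<!DOCTYPE html>
<html>
  <head><title>Product catalog</title></head>
  <body>
    <div id="app"></div>
    <script>
      // Runs during rendering: afterwards the rendered DOM differs a lot from the crawled HTML,
      // which is roughly the kind of diff the heuristic looks at.
      document.getElementById('app').innerHTML =
        '<h1>Product catalog</h1><p>200 products, with descriptions and prices.</p>';
    </script>
  </body>
</html>
```

In a real app the script would usually be an external bundle, but the comparison is the same: the initial HTML versus the DOM after rendering.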
But you winked twice. What about link juice, Martin? How does link juice play a role with rendering? Just to finish that, because I have this concept in my head that JavaScript SEO is slowly dying, that it's eventually going to dissolve because you guys are getting better at this. So basically you are on the path of eventually killing the two waves completely, right?

I would say that we're hoping to make it all a little bit better, so that you don't have to do as many things. But as with normal technical SEO, I don't see JavaScript SEO dying, because there are just so many things that you can do wrong, and it takes a lot of experience to be able to debug, find the issues, and improve things. And it keeps changing, right? There's new stuff coming in, and with every new bit on the web platform you're like, does this work with Googlebot?

So this is interesting, sorry, but I want to follow up on what you said. You're saying that even if you guys get really good with JavaScript, with the resources and, I'm guessing, some technology that you use to optimize that, you still think JavaScript SEO is going to be a thing in two or three years? Because I had this idea that it would dissolve. I mean, it's going to be a thing in the sense that we will be better at it, but there are always technical details that you can get working well or get working, kind of, terribly. Oh yeah, with JavaScript. Even normal technical SEO is already hard, and it's something a lot of people struggle with: internal linking, new unique URLs, all of these things. With JavaScript it's all hidden away, so you really have to know how JavaScript works, and when something goes wrong, that's really not going to be trivial to find. And new frameworks, new elements in Chrome, all of these things kind of come together.

But my logic was that you guys are using, sorry, the latest version of Chrome. Yes. So new frameworks actually work with the latest version of Chrome, so eventually it's going to be one-to-one. Yes and no. Is that naive thinking? It's a little naive to think that, because in the end it's still not a human being sitting in front of it, looking at your website and going, huh, OK. It is technical infrastructure. No, I know, there is technical infrastructure. And there are so many interesting implementation details that can interact with the web platform in interesting ways. To give you a very simple example: what we're doing with web components. I'm writing the guidance right now, so excuse me if I don't have a very polished answer at this point; you get the raw answer from me. Web components work fine in Chrome, and we have the latest version of Chrome, Chrome 76 as of today, actually a couple of days ago, in Googlebot, so that's fine. The thing there is that we have to make a decision about what to index. As a user, if I go to a website that has a web component and there's something in the shadow DOM, then I see the shadow DOM content. If I were running Internet Explorer 10, I would see the light DOM content, which otherwise gets overridden. So some people might think: if I have a fallback for crawlers that do not understand JavaScript, I'll be covered in the first wave of indexing, so I'll put my fallback content into the light DOM. But then Googlebot never sees that. So that's still something you need to know and be aware of. You might end up with someone coming to you and saying, this content is there, it's in the DOM, we don't understand why it's not showing up. And then you have to know: that's because of JavaScript, specifically because of the shadow DOM. The shadow DOM overrides the light DOM, and the way Googlebot works is that it flattens the shadow DOM into the DOM, overriding the light DOM in this specific case.
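Here is a small sketch of the fallback pattern Martin is describing, with an invented custom element: the light DOM child is the intended "fallback for crawlers," but once the component attaches a shadow root, the shadow DOM content is what gets rendered, and, per the explanation above, what Googlebot ends up flattening and indexing in this case.

```html
<!DOCTYPE html>
<html>
  <body>
    <product-info>
      <!-- Light DOM "fallback": hidden once the shadow root renders, and per the explanation
           above, not what ends up in the flattened, indexed DOM for this element. -->
      <p>Fallback description meant for clients without JavaScript.</p>
    </product-info>

    <script>
      // Invented example component: attaches a shadow root whose content replaces the light DOM.
      customElements.define('product-info', class extends HTMLElement {
        connectedCallback() {
          const root = this.attachShadow({ mode: 'open' });
          root.innerHTML = '<p>Full product description rendered from JavaScript.</p>';
        }
      });
    </script>
  </body>
</html>
```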
So, to make my point from earlier: JavaScript SEO is not going to go away; it's going to change. It used to be much more technical, on the one hand. Today, JavaScript SEO is about finding the pitfalls and gotchas in today's technology and working around them, or figuring out a better way to do things. In the future it's going to be more like: it works out of the box, but these are the things that can still go wrong, and these are the things we need to do to debug them.

We got a little bit geeky here, so just to summarize, and I know you are trying to simplify it, but to simplify it even further: basically what you're saying is that in the future, JavaScript SEO is going to evolve into making your job, Google's job, a little bit easier, and making sure that everything we push out to clients is very easy to crawl, index, and understand. I think that's one thing, but all of the troubleshooting work also comes into that, and that's something where we can provide some tools to help. But things like shadow DOM and light DOM: how are you going to figure that out unless you already know it's a thing? Or things like using Canvas to put content out there, where we think, oh, Canvas is an image, so we index it as an image. That's a bunch of consulting. Right now it's more about figuring out what's going wrong and helping with troubleshooting, and it's going to turn more into: there are ten ways of doing this in JavaScript, and nine of them are terrible, because developers are trying to figure out the right way.

That's one of the reasons why I want developers and SEOs to sit at the same freaking table. Because developers are like, OK, this is really hard for us, this is making everything slower, and they are not necessarily thinking about whether Google can index this, or whether a search engine can see it. Because I found out that for a while, a lot of people in the SEO sphere doing JavaScript SEO were just advising developers to pre-render. I was waking up at night crying. I mean, I think that was a good first step. A good first step, three years ago, maybe. I'm sorry, I know you're not a huge fan of doing that. I mean, it's also one of those things where, as a first step, you have to know what the limitations are, and when you know the limitations, you can work around them. And if you have a website and you need to get it indexed, you can't just say, well, Google will figure it out in a couple of years; that's not a good business model. I was actually looking at how quickly you catch up with technologies, how well you are doing compared to a few years ago or even a year ago, and I had this vision in my head: OK, in one or two years... That warms my heart, because you're one of the few people who say we catch up quickly. Oh, we're going to publish that soon.
Well, the heavy lifting you did with indexing is tremendous, because just two years ago you couldn't index pages where the links were nested in JavaScript. So, yeah, sorry. So basically you're saying it's going to evolve into being much more complex. Sorry, I got excited. I like that idea. It's going to stay technical.

OK, let's take a break here. I'll pause the recording before we fill up YouTube. I think we could go on forever, and there are probably a million questions from your side as well that you want to squeeze in, but we have to take a break somewhere. So I'll stop here. I'll set up the next sessions, and you're welcome to drop your questions there, or of course ping us on Twitter or post in the Webmaster Help Forum; folks there are really helpful as well. With that, let's take a break, and I wish you all a great weekend. Thank you very much. Have a great weekend. Bye. Now, if I could find the stop button.