All right, welcome everyone to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do is these Office Hours Hangouts with webmasters and publishers, like the ones here in the Hangout and the ones that submitted a bunch of questions already. So as always, if any of you who are kind of new to these Hangouts have a question on your mind that you've been trying to get answered, feel free to jump on in and ask the first question. Otherwise, we can just get started.

Hi. OK, we have one. OK, so we currently work on an Oracle ATG system. We're new to it, so I don't know much about it, but obviously it's an e-commerce system, so it uses a lot of query parameters. I've tried to block as much as I can in the robots.txt. Our static pages use query parameters as well. And what I'm finding in Google Search Console is that the URL parameters tool is blocking the query parameters for the static pages as well. I can't delete that; I can only reset it. What would you advise I do?

OK, so I'm not quite sure I understand the question correctly, but it sounds like you have URLs with a lot of query parameters, and some of these you're trying to block on purpose, and some of them you want to have crawled normally. Yes. So what I would do there for the most part, if you can, is let us crawl all of those versions. So don't block them in the robots.txt. Instead, use the rel canonical on the pages, use noindex on these pages, or use the URL parameter handling tool in Search Console to let us know which of these versions you want to have blocked and which of these versions you do want to have shown in Search. The URL parameter handling tool is sometimes a bit tricky because it's easy to break things there, so I would double-check that you let us crawl those parameters that are actually relevant.

OK, I get exactly what you're saying. Just to give you an example, I'll paste it in the chat. So these pages are like a page that [inaudible], but I don't want that page blocked. I haven't blocked it in the robots.txt or anything, but if I look in my Search Console under URL parameters, it's actually blocking it there. It's excluding it there; it's excluding the page name. Yeah, so in the URL parameter handling tool. So what I noticed when I go to that URL is that it redirects to a URL with a session ID. Yes. Which I am guessing is one of those things that you're trying to block. Yes. But I don't want this page blocked. Yeah, but if it's redirecting to a session ID, then that's something where we would see that redirect and try to follow it. And if that redirect target is blocked by robots.txt, then we wouldn't be able to index that. So that's kind of a problem there. And I see the session ID is added with a semicolon, so not as a normal URL parameter, which makes it even harder for us to understand that this is actually not a part of the filename but a separate parameter. So ideally, things like session IDs you would use as normal URL parameters, behind the question mark of the URL, so that you can use the URL parameter handling tool to clean that up. As it is now, it's really in one of those states where it's really, really hard for us to crawl and index properly. OK, thank you so much. Sure.
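To illustrate the difference John describes (the domain and parameter names here are hypothetical), a session ID joined with a semicolon becomes part of the path, whereas one behind the question mark is an ordinary query parameter that the URL parameter tool or a rel canonical can clean up:

    # Hard to crawl: the session ID is effectively part of the filename
    https://shop.example.com/product;jsessionid=ABC123?color=red

    # Easier: the session ID as a normal query parameter
    https://shop.example.com/product?color=red&jsessionid=ABC123

    <!-- On the parameterized page, point Google at the clean version -->
    <link rel="canonical" href="https://shop.example.com/product?color=red">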
Hi. So I'm having a question. We have a few 'near me' pages. Those pages basically detect the location from the server. So Google detects that location as Palo Alto or something like that, and Google only indexes those pages as one, because that page is a single page for all the locations. So that page serves the content instead of having some subfolder or something in front of it. So how can we solve this issue? Because those pages also have breadcrumbs city-wise, like for Delhi NCR, and Canada, Toronto. Those pages have breadcrumbs with a city name. So either we remove the breadcrumbs, or we show the user that, because the users will be seeing Palo Alto, CA or something like that in the subfolder. So did you get my question?

I didn't quite understand what you mean. So you have breadcrumbs with the city names? Yeah, I have breadcrumbs with city names. Like the pages are 'near me' pages, like something near me, something near me. And the 'near me' searches have great search volume, so we have created pages for 'near me'. So basically, those pages have breadcrumbs. If someone is searching for a restaurant in some location in the city, the person will be able to see the breadcrumbs for that location. But the page is a single URL. The page has a single URL for all the locations, but the breadcrumbs are different.

OK, so you have one single landing page for all of these different locations? And it automatically detects the location and shows the content rendered for it. I'm not sure if I understand it correctly. What I would recommend doing there is maybe starting a thread in the Help Forum with an example page. It just seems to me that removing the breadcrumbs is the only solution; there is no other solution, I think. Google only has this solution, I think. Yes. I don't know. I don't quite understand the exact setup that you have.
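For context (the URLs and names below are hypothetical), breadcrumbs like these are usually expressed with schema.org BreadcrumbList markup, and each city would normally live on its own URL rather than a single geo-detected page; a minimal sketch:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "BreadcrumbList",
      "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Delhi NCR",
         "item": "https://example.com/delhi-ncr/"},
        {"@type": "ListItem", "position": 2, "name": "Restaurants near me",
         "item": "https://example.com/delhi-ncr/restaurants-near-me/"}
      ]
    }
    </script>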
OK, so I have one more question. After the Fred update, I can see an increasing number of impressions, but the number of clicks is the same. The number of impressions is much higher now. Maybe it's due to the rich cards. Does each rich card count as a separate impression in the search results? So for the impressions, we count whether one of the URLs from the site is shown on one of the search results pages. That's something where we don't count differently depending on where on the page the user is scrolling. If it's shown on the search results page, then that's counted as an impression. So those pages are AMP pages with rich cards. So basically, both are separate URLs, I guess. Yeah, exactly.

So can you tell me a little more about this Fred update? Is this an official update or something? This is just one of those updates that we make all the time, so there's nothing really specific that we would have to communicate about it. So why are you not disclosing this at this time? We tend not to disclose most of the updates. We make hundreds of them a year. So for the most part, it doesn't make sense to say, oh, we made a change today, we made two changes yesterday. That's not something that, from our point of view, really makes sense.

So a few popular websites are showing content on the desktop version of a page, and on the mobile PWA pages they are not showing any content; the content is there for adding keywords to the page. So they are adding some content to the desktop pages, like some content from Wikipedia and such, but the PWA pages don't have any content. So is this a good approach, that we add content on desktop pages but don't put that content on PWA pages? That sounds like a bad idea. Especially if you're copying content from Wikipedia, that doesn't provide any value. No, let's say the content is unique. But the version is different: on the desktop page we have some content, 200 or 300 words of unique content describing the city, and on the PWA page we won't show any content, because it would potentially be a bad user experience. I think that's a bad idea. If you have something that's useful on your desktop page, then it's useful on the mobile page. A lot of people are mobile-only now when they use the internet. And if you suggest that something is available on your web page by showing it in Search, and it's not available when they come with a mobile device, then that's a really bad user experience. So that's really something I'd avoid doing.

And is it a good approach to use emojis in the title to improve the CTR? I don't think that really boosts the CTR. I mean, some people do that, but I don't think it really helps that much. OK, thanks. Sure.

All right, let me run through some of the questions that were submitted, and then we can open things up for more general questions from you all as well. OK, so someone created an infographic with a big question in it. In general, I'd recommend trying to simplify things and just writing the question into the text. That makes it a lot easier for us to go through and understand what is actually happening. In this case, the question is about a website where a 15-year-old website or domain was bought and they're trying to put content on it. And they realized afterwards that this website was used for adult content in the past. And now they're wondering, what's up? What do we do with all those crazy links that are pointing at my website? So from our point of view, if you want to clean things up with a disavow file, that's totally up to you. That's something that you can do. In general, if you put new content on a website, then we'll treat it as a new website, so we'll try to handle that correctly on our side. If you're seeing that your website is being filtered by SafeSearch incorrectly, in this case, perhaps, maybe your new website has nothing to do with any adult content, and it's still being filtered by SafeSearch, then there is a form in the Help Center that you can submit to let us know that you think this might be a problem. And then someone on the SafeSearch side will take a look at that and say, oh, it looks like our SafeSearch algorithms just haven't caught up with this; maybe we can just fix this for this website. So those are the options that are available there. In general, if you're buying a used domain name, I would recommend looking into the history of that domain name ahead of time, because that can save you a lot of hassle down the road. It's just like when you buy or rent a place for your business in the city: you probably want to take a look at that place, think about the history of the general area where you're placing your business, and consider whether it really makes sense for you there. And sometimes it does make sense. Sometimes you have a really neat opportunity to pick up a really cool domain name, and you're willing to take into account that it might take a bit of time and a bit of extra work to actually get things up to speed, but then you have this really cool domain name that you're proud of actually using.
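Where inherited spammy links are a real concern, the disavow file John mentions is a plain text list uploaded through Search Console; a small hypothetical example:

    # Disavow file (domains are made up for illustration)
    # Ignore all links from an entire host:
    domain:spammy-links-example.com
    # Or ignore a single linking page:
    https://another-example.net/old-adult-links.html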
All right, let's see. On our site, we offer different types of services for online booking. One of the services is a third-party affiliate program. Could this be a problem for our website? In general, affiliate programs are perfectly fine. It's not something that you need to hide, and it's not something that will automatically make your website look lower quality. We do, however, expect that you provide something of value in addition to just that affiliate link. So instead of just copying and pasting the same content that all of the other affiliates are using, really think about what you can do on your website to make it unique and compelling, so that Google wants to recommend your website instead of all of these other ones. So don't just be as good as the others; really make sure that you're providing something extra, something unique that stands out on your side. That's the direction I'd head there.

Wondering how Google decides whether to announce a ranking boost or change? For example, the mobile interstitials change, HTTPS, mobile-first, et cetera. So this is something that, from our point of view, is sometimes a bit tricky. But sometimes it's kind of obvious as well, in the sense that if there are things that we can clearly tell webmasters they can do to improve things on their side, then we'll try to do that. For example, when it came to the mobile-friendly ranking change, which was, I think, about two years ago, we recognized that this was a big problem, that a lot of sites were not mobile friendly, and we wanted to make a change in the search results in that regard. And to make sure that sites had a chance to understand what was happening and to make the right decision as well, we wanted to spread the word and say, hey, we're doing this; you can do this on your side to make sure that you're in line with it. And in general, the recommendations we have are not just for Search. They're things that are relevant and valuable for websites across the board. So the mobile-friendly change is something where, of course, we can take that into account in Search. But if half of your users are coming from mobile devices already, then that's something you want to take into account and handle on your side too.

When a website is penalized by Google, does it get demoted, deindexed, or both? So both, I think, wouldn't really make sense, because if it's removed from the index, then it doesn't matter if it's demoted as well, because it's not shown in the search results. In general, we do try to take the appropriate action when it comes to manual actions. I believe there is a website that we put up a while ago, How Search Works, that goes a little bit into the types of manual actions that we take and the types of situations that we run across. So we don't remove websites just because we think there's something small that's wrong with them. In the ideal situation, what we try to do is make it so that whatever a website is trying to sneakily do just doesn't have any effect. So one really easy example is keyword stuffing.
That's something where we see a lot of websites trying to do that, or at least over the years they have traditionally been doing that, where they think, oh, if I put this keyword on my page as a certain percentage of the words, then Google will see my page as being more relevant for this keyword. And the webspam team could go in there and say, oh, you're trying to do keyword stuffing; we'll remove you from Search completely. But on the other hand, we could go in there and say, oh, you're trying to do keyword stuffing; therefore, we'll just make sure that your site ranks normally, and we can ignore the keyword stuffing that you're doing there. And those are the kinds of steps we're taking, where we try to recognize the issues on a website and find ways that we can ignore them. Because we know a lot of websites are not explicitly trying to game Google's systems and be really sneaky and do crazy black-hat stuff; they're essentially just following bad advice that they got somewhere. So we want to rank a website based on the good things on it, rather than just remove it for any small thing that it does wrong.

Our website suffered a penalty yesterday; all the keywords dropped in Google Search. We received yet another DMCA removal request the same day. We handled these complaints on the same day. What can we do? So in general, these are probably not directly related. There are some things that we do in situations where we see that a website gets a lot of DMCA complaints and has a lot of problems with content that is taken from various other sources. But in general, that's more of a long-term situation, where when we see this happening over and over again, then we might take action on that, or rather, our algorithms might take it into account. So if you're seeing one complaint coming in and the next day you're seeing a change in Search, then those are probably not related. That's probably something completely disconnected, where maybe our algorithms are just seeing your site in a slightly different way, and it's not related to this individual DMCA complaint. With regards to DMCA complaints, these are legal issues, and this is not something that we'd be able to handle for you in the forums, or even if you emailed us directly. This is really something that's handled on a legal level, because it's a legal tool that's used by lawyers in general.

I'm trying to remove some web pages from my website. I returned 410 for all of the pages over a month ago, and I used the URL removal tool as well. Now they're removed from the search results, but I want to remove them from the index as well, because I think they're causing a thin content issue. What can I do to remove these pages faster? So in general, forcing a removal from the index probably won't change things significantly. If you're looking at a bigger website, and I don't know if this is the case, but especially for bigger websites, if there is a handful of thin content pages that you're trying to remove, then that's not going to sway Google's view of the website overall. So obviously it can make sense to remove pages that you think are useless on your website, but that's kind of independent of the bigger view of your website overall. So from my point of view, I would leave it like this, in that the URLs are already removed from the search results and you're returning 410 for these pages, so they'll drop out of the index over time anyway. That's something that will clean itself up there overall. But I would take a step back and look at your website overall instead, especially if you're thinking that maybe there is a quality issue with the website, and try to see what you can do to significantly improve the quality of your website overall, past just removing a couple hundred thin content pages.
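For what it's worth, a 410 like the one described can be returned straight from the server config; a minimal sketch for nginx (the path is hypothetical):

    # Tell crawlers these removed pages are permanently gone
    location /old-thin-page/ {
        return 410;
    }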
If you use robots.txt to block a subdomain, do any of the links to the subdomain have any ranking value for the main domain, or are they ignored? So in general, any links that go to a URL that's blocked by robots.txt end up on that URL, and it might be that we try to index that URL without the content in the search results. But none of those links get forwarded, because we don't know what happens on that URL. On the other hand, if you let us crawl that URL and you have a rel canonical to one of your other pages that you think has the same content, then we can forward those signals to that URL. Or if there's a redirect to a cleaner version of that URL, then that's something that we can follow. But if you block it with robots.txt, then that blocked URL collects the PageRank, collects those signals, and it doesn't forward anything.
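As a concrete sketch (hypothetical subdomain), a fully blocked host looks like this; per the above, any links pointing into it stop there rather than passing signals on to the main domain:

    # robots.txt served at https://blocked.example.com/robots.txt
    User-agent: *
    Disallow: /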
When I make a change to my website, it usually rises a few places, but then it always falls back down. It happens so often that it leads me to think that some part of Google's algorithm hates me. What can I do? So I'm pretty sure that no part of Google's algorithm hates you. Google's algorithms generally try to be pretty objective when it comes to a website, and they're not specifically tuned for messing with individual websites. That said, I suspect what our algorithms are doing is recognizing that you're making changes on your website and assuming you're making great changes and doing great things there; maybe they're expecting that of you already. But over time, they're recognizing that maybe it wasn't as great as they thought initially. So maybe there are things that you can do to significantly move your website up another notch, rather than just incrementally adding a few things or tweaking things on your website. I think that's a really hard situation, because obviously it would be easier to just open the text editor and tweak a couple of meta tags or change some lines of text on the page. But sometimes it's really worthwhile to take a step back and think about what you can do to significantly increase the quality overall. And that's something that will take a bit of time to actually be reflected in Search as well. So it's not something where you publish a new version of the website and the next day everything ranks 10 places higher. It's really going to take a bit of time for everything to settle down and be recognized as something significantly better.

Are product IDs OK in URLs besides the keywords? What's the maximum URL length, or the ideal one? So I think product IDs refers to IDs in query parameters in the URL. And from our point of view, that's perfectly fine; that's a fantastic practice. Using parameters in a URL is something that works really well for search engines, actually, because what happens there is that we can try to understand those parameters and handle them appropriately when it comes to crawling and indexing. Whereas if you put everything into the path and the filename of a URL, that can make it really hard for us to crawl and index those properly. Kind of like the case we talked about way in the beginning, where a session ID was placed essentially into the filename. When it's in the filename like that, we essentially can't take that session ID out and crawl the URL without it. So using query parameters in that way really makes it a lot easier for us to pick that up. And with regards to URL length, there is no optimal URL length. I believe we support URLs up to about 3,000 characters, and that's a pretty crazy long URL, so that's not something that I'd recommend.

If you have a second domain, would you redirect it to your main site? Does there ever come a point where you should remove the redirects, as the PageRank values of those pages will diminish over time, and therefore the old pages will lose value and you don't want to associate low quality with your main website? So from my point of view, from a Search side, there is no reason to remove that redirect. I think it's perfectly fine to keep it if you can. Sometimes there are practical reasons where you say, oh, I can't keep redirecting; my server people are complaining about this redirect being in place for 10 years now. Then for practical reasons, you might say, OK, after a couple of years I'll take this redirect out, because nobody even visits this redirecting URL anymore, and therefore it doesn't make sense to keep it. But from a Search point of view, you can keep that forever if you want to.
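A host-wide permanent redirect of the kind being discussed is usually a few lines of server config; a hypothetical nginx sketch:

    # Old secondary domain permanently redirecting to the main site
    server {
        server_name old-example.com;
        return 301 https://www.example.com$request_uri;
    }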
I assisted a few publisher sites in HTTPS migrations, but there seem to be some ranking issues. I'm not sure if it's down to algorithm updates, negative SEO, or the migration itself. I'd like to ask some questions. I don't know if you're here in the Hangout. I'm here. All right, go for it. Yeah, John, thanks for having me here. Well, firstly, my question is about the top stories section of Google Search. There was a lot of change there over the last couple of months. So the top stories section is still algorithmic, as you've confirmed, and not just limited to certain publisher sites. But many websites that have previously appeared there are having issues appearing in top stories now. I understand that there are filters on inclusion to improve the quality of Search, and I've seen many other websites reported to have been suffering from the same problem. Upon examining those, I was able to spot major drawbacks that could potentially affect their search visibility; I mean, some weren't even mobile friendly, and you're not going to be shown there when over 50% of the searches you get are from mobile devices, obviously. But I must also say that some had really informative content and have everything. There are a few particular sites where I'm finding it really difficult to see any issues at all. Furthermore, there are additional measures, apart from just being, let's say, mobile friendly and fully secure: there's some real thought put into the mobile design as well as the desktop design, being fully responsive. The content definitely doesn't seem to be an issue; obviously, I can't say how high quality it is from the algorithms' point of view. No SEO at all, so it's natural SEO. And they're reputable sources by worldwide standards; people have always linked to the source. I've been in contact with other webmasters and SEOs, and they've all agreed that there was a problem. None of them were even able to name the issue, to say it could be this penalty or that; they weren't even able to name the issue, but they've all agreed that there was one. Also, in the last few months, around summer, we spotted over 300,000 links coming from just nowhere, and they were using an intermediate link to, let's say, 10 pages. So about 4,300, 4,000 links to 10 pages on this website. Could this really affect the website? Because the loss of visibility was about 90%.

So specifically with regards to the top stories section: from our point of view, it's an organic search feature. It's not something that's tied to Google News, and it's not something where we'd say you have to implement specific markup on your pages to be shown there. It's algorithmic ranking. So sometimes that can mean that sites are shown there, and sometimes it can mean that some sites are not shown there. I'm happy to look at specific examples and pass those on to the team. I don't know if I'd have any specific feedback where I'd be able to say the site is not shown in the top stories section because of this or that. But these examples are sometimes useful for the team to fine-tune their algorithms, to make sure that they're picking up the right signals and not dropping things that could potentially be really useful to users. So if you want to send me some queries and screenshots and sample URLs that you think should be in there, then I'm happy to forward that on to the team. That would be great, because it has been months now and we've checked pretty much everything and can't find any issue. It's not just SEO related; I just can't find any issues, and we keep concentrating on little things that just don't make sense and aren't going to make any difference. But I'll send it over. Yeah, I don't think the link side has anything to do with that. I suspect that's just the way our algorithms are looking at the site overall. But I'm really happy to forward these on. I've heard from a handful of people with regards to top stories changes. So if there is something more specific that you can give me, then I'd love to pass that on, just to make sure that the team is aware of these issues and has some examples to look at with regards to what we could be doing differently.

That would be great. But the second question, and this is going to be a quick one. Actually, this is Google Search slash Google News, I would say. Apart from removing inline CSS within the page, is there anything that we can do to help the bot crawl the full article in a healthy way? Because otherwise, sometimes it just takes half of the article and says, OK, finished crawling, this is it. When we removed the inline CSS, we realized that most of the time it helps with crawling the full article. Is there anything else that we can do, maybe, apart from doing that and cleaning up the code around it, to help with that? Yeah. So usually, this is more of an issue with regards to Google News, where we do try to extract the body of the article, and we'll flag things like 'article too long' or 'article too short'.
So if you're seeing that for individual pages, I'd recommend going through the News Help Center, where there's a contact form that you can use to contact the News team. In general, as far as I know, the News team tries to extract the article from the template of the page. So the cleaner you can keep your template, the easier it will be for us to extract that. Whereas if it looks like there's a comment section in between, or there's suddenly a big block of, I don't know, an image or an active element in between, then that might make it hard for us to extract the actual contextual content around it. But that's something where I'd also take those specific examples and ping the Google News publisher team directly, so that they can take a look at them. OK, thanks so much. All right.

Let me run through some more of the questions here, and then I'll open things up a bit for everyone else again. How would you define thin content? For example, are we talking about thin content when looking at a page whose only function is to link to other sites, similar to an HTML sitemap page on a website? Does Google see that as a doorway page? So in general, I don't know; it's hard to define thin content on the fly like that. But usually I look at it as something where there's not a lot of value being provided on the page, in the sense that if I were to stumble across this page in the search results and I clicked on it, and it takes a bit to load, and then it comes up like that, I'd think, ah, why did I end up here? There's nothing that really helps me here. That would be thin content from my point of view. And sometimes there's a lot of textual content on a page, but not a lot of unique value that you're adding there. So just linking to another site might be an example of that; having a lot of cruft on a page with just one small section that's actually useful might be another example. I believe in the Help Center we have more of an official definition, though.

Newly approved site in Google News: how long does it take to get indexed? So I don't know specifically with regards to Google News whether they have anything special there, but in general, you need to have content on your website before you're approved for Google News. So your site should already be indexed in normal web search before it's actually picked up by Google News. In general, for new websites to show up in Google web search, it's often a matter of a couple of days, I've found recently. So it's actually fairly fast. If you submit a sitemap file with links to the URLs, then we can generally pick that up fairly quickly.

How should I list my software to get these enhanced search results? So for software, I looked at this, I think, for one of the previous Hangouts as well. There is specific markup that you can use for software applications, but I believe at the moment it's still limited to specific trial partners. Usually that's clearly specified in the Help Center pages or on the developers.google.com site, so I'd double-check there. And often, especially for the newer markup types, there's also a contact form where you can submit your site, so that someone from the team can double-check to see if it makes sense to show your site as well, as one of the initial people to try things out.
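For reference, the software markup John refers to corresponds to the schema.org SoftwareApplication type; a minimal hypothetical example (whether it triggers the enhanced result still depended on the pilot at the time):

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "SoftwareApplication",
      "name": "Example App",
      "operatingSystem": "ANDROID",
      "applicationCategory": "BusinessApplication",
      "offers": {"@type": "Offer", "price": "0", "priceCurrency": "USD"}
    }
    </script>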
What's the role of the age of a domain in ranking? Kindly reply in detail, because people say that no site younger than 365 days can achieve first rank. A new domain can rank number one within a couple of days. It's not that we have any kind of artificial filter that says, oh, this is a new domain, and therefore it can't show up in the search results. You can make a new website, it can be extremely relevant for some queries, and we'll try to rank it appropriately when it comes to those queries. And that could be ranking number one. So it's not a matter of needing an old domain in order to rank. You need to have really good content. You need to have a fantastic website. You need to have something unique and compelling that people are trying to find. So instead of focusing on the age of a domain name, focus on your website more, and try to figure out what you can provide that's unique and compelling and isn't available on any of the other websites, where you know that people are trying to find that.

Hello, John. Yes. Is there any limitation on title tag length? Because sometimes blog post titles are too long, so when we use that title in the title tag, it's usually more than 70 characters. Will it affect SEO if it is more than 70 characters? The title tag? No, that's perfectly fine. Usually what happens in a case like that is that we won't show the full title in the search results; we'll try to figure out which parts are relevant based on the user's query. But that's totally up to you. And I have another question. When Googlebot crawls a website, does it crawl the whole website at once, or does it crawl some of the pages at a time? It's always on a per-page basis. So some pages will be crawled more often; some pages maybe only every couple of weeks or every couple of months. Because we have a blog website and we post on the blog on a regular basis. But in the last few weeks, we noticed that Google is crawling the home page and category pages, but the posts we have published in the last two weeks are not indexed by Google. That is why I'm asking this question. Is it some issue on the website, or is it normal? That can be completely normal. Yeah. OK, thanks.

John, can I ask a question? Sure. So I have a client; they have quite a big e-commerce site, and they've also got a specific blog for the community that's on its own domain. So I want to tidy things up by basically moving the blog to the e-commerce domain. They've got really good content, and they have done really well with the blog. Do you think it's even necessary to move it onto the e-commerce domain, or should they just leave it where it is? So I think the question is, you have a blog and it's on a separate domain, and you're considering combining it with your e-commerce site. Yes. Yeah. So essentially that's up to you. From my point of view, that can work either way. On the one hand, it can keep things separate, in the sense that your blog will rank separately from your e-commerce site. On the other hand, it can also make sense to combine things: if your blog is really about your e-commerce site, then maybe it makes sense to put those on the same domain. But I don't think you'd see any significant gains or losses by going one way or the other. So I would look at it more from a practical point of view, from a marketing point of view, rather than from pure SEO. Yeah. I mean, I'd rather not move it, in fact. All right. Let me grab some more of the questions that were submitted here, a bunch of longer ones, so it's kind of hard to pick and choose.
We've been running the only site that fact-checks celebrity reporting for the past eight years. But around a week ago, we began to suffer a significant drop in traffic and rankings, and we can't figure out why. We're wondering how important page load time is for organic performance. Also, we suspect some large outlets that don't like being called out for inaccurate reporting are linking to us from low-quality sites. Is there a way to disavow all of those unwanted links? So I believe the site also posted something similar in the forum that I saw there. So maybe just to cover these two specific items: with regards to page load time, from our point of view, we try to differentiate between sites that are significantly slow and sites that are within the normal range. When we're looking at things that are really, really slow, it's more a matter of it taking several minutes to actually load the pages. And that's something where the algorithms might say, OK, this is really, really slow; we might need to take action on how we show it in the search results. But if you're within the reasonable range, and you're looking at, I don't know, a couple of seconds or maybe even half a minute or something like that, then tweaking that isn't going to have a direct effect on your ranking in the search results. There might be indirect effects, in that users stay on your site longer, find more content that they like, and recommend it to other people. Those kinds of indirect effects will always be there. And there are lots of studies out there showing that faster websites tend to perform a lot better with users in general: conversions will be better, people will view more pages on a website, all of that.

Can I follow that up? Can you hear me? Yes. Yes. So, I mean, we're still trying to figure out why this is happening. We get linked to by very reputable places; the Huffington Post and a half dozen other places linked to us today. We're really the only people who do what we do. But something happened last week, and we know that people have deliberately hit us with thousands of things that we're constantly putting in disavows. And someone brought it up earlier also, the DMCAs: last week someone hit us with 50 of them. None of them were true. I said to them, you should retract these. They said, we're a third party, we can't. So I then had to put in 50 or 51 counterclaims, and they said they obviously wouldn't oppose them. So I'm wondering, you know, we're still trying to figure out what happened that suddenly changed our ranking. So with regard to links, that's something that I looked at there as well, and that's not something that would be affecting your website. So the links wouldn't be a problem there. I don't believe, from a DMCA point of view, that that would be an issue either; at least it wasn't when I double-checked last week. The main thing that I see when a site drops in rankings like that, or sees a change in the search results, is usually just the overall quality of the website: essentially the whole picture when our algorithms look at the website, how we can judge the quality of this website. And that's something that's sometimes really tricky to figure out and to work on improving across the board. I'm sorry, I have a little child who's hopefully not crying in the background.
I know you can't offer very many specifics, but what are some of the more general ways that we can improve so that we can get back to where we were? I ask obviously as the editor and the owner, but also because we're very committed to this mission of fact-checking; we're the only ones who do this. It is sort of annoying, separate from what I do, to see the inaccurate stories on top, and then obviously it becomes an echo chamber, especially in entertainment, where everyone links to the inaccurate story in a way that gives the most inaccurate story better PageRank. Yeah, I think that's sometimes really tricky. So one thing that I tend to send people a lot, and I'll put the link in the chat, is an older blog post we made a while back with regards to the Panda algorithm, which has a bunch of questions that you could go through together with people who aren't directly associated with your website. So that's where I would start: try to take people who are really not associated with your website, who can look at it objectively, present them your website with a specific task, present them maybe some other sites with similar tasks, and go through those questions and see where they tend to get stuck, and think about what you could do on your side to address those issues when they come up. And some of this is obviously really hard: if you've worked on your website for years and years, it's like, oh, it's your baby, and how dare someone criticize my layout or my ad placement or whatever. But it's really useful to get this objective feedback and to think about what you could do across the board to shift things up a level.

John, may I ask a question regarding something that you said before about updated content? Sure. Well, in a previous Hangout, I asked you about pinging Google by submitting a request via PubSubHubbub. And well, I have done some experiments with that, and at least for content that is updated, I can see there is some bot, not Googlebot, something like Feedfetcher or something like that, that comes to request the updated page and the RSS feed or sitemap. But Googlebot does not always come to pick up the page. It usually comes when it is a new page, but not an updated page. And sometimes what happens is that I publish a new page, but for some reason I forgot to add something; I add that text half an hour later and ping again, and Googlebot does not come. Is there a timeout on when I am able to send a new ping for updated content? Is there a rule that you can somehow hint at? I don't know if you can tell us everything you know, but what can you tell us? I am not aware of anything explicit with regards to updated URLs. So I would love to take a look at some of these examples if you have something, because that seems like something that might be more relevant for some sites, and it might be interesting to see what our systems are actually doing behind the scenes there. Usually what I tell people is to update the last modification date in the sitemap file and to ping us with that again, and usually we'll be able to take that into account. If you're talking about publishing something and then a couple of minutes later updating it, then I don't know if our systems will say, oh, I'll wait a day and see if the change date sticks and then re-crawl it, or if they would go ahead and re-crawl it a couple of minutes later as well. So that's kind of an interesting setup.
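As a concrete sketch of the sitemap approach John describes (the URLs and date are hypothetical), you update the lastmod on the changed URL and then re-ping the sitemap:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/updated-article/</loc>
        <lastmod>2017-06-02</lastmod>
      </url>
    </urlset>

    # Then notify Google that the sitemap changed:
    https://www.google.com/ping?sitemap=https://example.com/sitemap.xml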
Yeah, well, one thing that usually works, because that doesn't work well for updated content, is to just go into Search Console, Fetch as Google, and request to submit to the index. Then Googlebot always comes after that, but it's a bit bureaucratic. It would be nice to have some code to automate that from my publishing system. So I wonder if it would be possible, or if not possible, just take it as a suggestion, to have something like an API call in the Search Console API, so we can somehow ping Google and say, oh, I really forgot to update this. It's not like we are trying to spam Google, and since the API call is authenticated, I think Google can recognize it's not just an ordinary spammer. Yeah, we are looking at things like that. So I think the difference that you highlighted there is really the tricky part on our side, in that the fetch-and-submit part of Search Console is one of the most abused features of Search Console. People will send all kinds of bots there to keep submitting things. From our point of view, that means it must work; it must be a useful feature. But on the other hand, it makes it really hard for us to say, oh, we will invest a lot more time to make this even easier to abuse by providing an API. But I know you're not the only one asking for something like that. So maybe there are ways that we can provide an API and still be able to filter out the spammy abuse side of things. OK, thanks.

Hello, John. Yes. Can I ask a question? Sure. Hi. Actually, I have a website with three versions: desktop, mobile, and AMP. What I want is to redirect mobile users to the AMP version. For example, if a user opens my website URL from a mobile device, I want them not to come to my mobile site but to be redirected to the AMP version. So is it OK to do this? Sure, sure. You can do that. You can make your AMP page the mobile page for your desktop site. That's totally possible. I know that it's a rare setup where you have a desktop page and an AMP page as the mobile version. Some people have only an AMP page, and they say, everything that I do works on AMP, and my AMP page is responsive; I can use that on desktop too. For example, the ampproject.org site is completely on AMP; there is no desktop version anymore. So that's something that's possible. It depends a bit on your website and on the functionality that you need to provide. If you have a lot of dynamic elements, if you have a login, if you have database lookups that you can't really cache, then those are things that probably don't work that well on AMP at the moment. But if your website is mostly static content, if it's a blog, a set of documentation pages, informational pages, then that might work really well as an AMP page, and you could publish only an AMP version. You wouldn't even have to worry about the desktop site anymore, because an AMP page is essentially a normal web page too.

Hello. Yes. Yeah, one more thing. Actually, I want to know, do I need to put the mobile alternate tag on that AMP version, or is the simple redirection fine? If you want to use the AMP page as your mobile version, then you would need to treat it in the same way that you would treat separate mobile URLs. So in the documentation, we recommend that you have the rel alternate link pointing to the mobile page, and from the mobile version, a canonical back to the desktop page. The canonical to the desktop page is something you have on the AMP page anyway. So essentially, those connections are what we're looking for there. And the redirect helps us as well, of course. All right. Thank you.
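In markup terms (URLs hypothetical), the separate-mobile-URL annotations John describes look like this when the AMP page doubles as the mobile version:

    <!-- On the desktop page -->
    <link rel="amphtml" href="https://example.com/amp/article.html">
    <link rel="alternate" media="only screen and (max-width: 640px)"
          href="https://example.com/amp/article.html">

    <!-- On the AMP page, serving as the mobile version -->
    <link rel="canonical" href="https://example.com/article.html">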
And one more question. Actually, previously I asked a question regarding, actually, we are from India, and we are running a news website. And we want to be featured in the live blog feature. We are running a live blog, but we are not able to appear in that section. So how can we get into that feature? You'd have to use the contact form in the Help Center there. I don't know if they're picking new sites to try this out with at the moment. Sometimes what happens with new features that they launch, specifically around the rich cards and the more dynamic content, is that they will work together with a bunch of people in the beginning to set things up and see how they pan out. We're trying to see how users respond to this content in Search, and how publishers are able to implement this markup and functionality. And depending on that, they will take the next step and say, OK, we will broaden this to a bigger set of testers, or maybe we'll just make it public for the whole world to use. So it kind of depends. But if you have the markup on your pages and you've submitted them with that form, then you're lined up for those next steps. All right. So do I need to submit that particular URL on the form only so that you can consider it? No, on the developer site there's usually a contact form. So let me... No, there is no option to submit; it's a pilot feature right now. OK. Yeah. Then in that case, you need to wait. I'm sorry. Yeah. All right. Thank you. Yeah. So sometimes it's really a matter of the team having a chance to analyze all of the information they've collected so far, and then, based on that, making a decision on what the next steps would be.

All right. More questions from any of you? I have a bit more time here, so we can drag things out a little bit longer. If there are questions or comments from any of you, I'm happy to take a look. With regards to session IDs, you really got me concerned now. Yeah. I mean, I'm not the best at that; you know, I'm not a developer. So I need to be able to advise these developers on what they should be doing, what best practice is. Is there anything you can give me some tips or advice on? So specifically with regards to that URL that you sent me: when I click on it, I get redirected to this weird session ID URL, and the session ID is separated with a semicolon from the main filename, and you have URL parameters afterwards with the question mark. So that makes it really hard for us to figure out what to do there, because there's a different URL every time we try to crawl that page, which means we have a really hard time understanding how we should actually index this page. So I would recommend moving the session ID out of the filename to somewhere behind the question mark, so that at least we can understand and remove that session ID when we crawl and index those pages. When I double-check this page, it looks like we are indexing something there. But because we're seeing different redirects every time, it's really hard for us to say, this is the content that we should be indexing, and this is how we should be indexing it.
That makes a lot of sense, because I've been struggling with this issue for almost a year now, trying to get as many pages indexed as possible. And even just submitting my sitemap: I've got 3,000 pages, and I've only got 95 indexed. So what you're saying makes a lot of sense. Yeah, that's something that might be playing a role there. I'm pretty sure the whole session ID setup that you have there makes it very, very hard for us. I know the team tries to work around these untraditional session ID setups, but it makes things a lot harder than they need to be. And it makes it a lot harder to track things on your end as well, because potentially the session ID might be a part of the URL that's actually indexed, and then that URL changes every time we crawl. So that's something I'd recommend trying to clean up.

John, in the meantime, would you suggest that I rather not block the session IDs in robots.txt, and just leave it as is while they fix it? Yeah, I would not block them in robots.txt. OK. Thank you so much. Is that something that can be controlled with URL parameters as well in Webmaster Tools, or are there some things that override those preferences? You can use the URL parameters there, but in this case, the session ID is outside of the URL parameters; it's technically a part of the filename. If you click on the link, at least for me, it adds the session ID to the filename with a semicolon, and then behind that there's the question mark with the traditional query parameters. The traditional query parameters are what you can control in Search Console, and what's before that is seen on our side as more like a part of the filename. OK, thank you then. No problem.

All right. Let me see if I can grab another question from those that were submitted. How does Google rank news websites like the BBC, given that they have links from spammy websites all the time? Yeah, a lot of websites have links from spammy websites, and we still try to rank them properly. We're pretty good at dealing with spammy links and ignoring most of them. If you look at a lot of the more common and popular websites, it's almost traditional that spammers link to those websites: we'll see people set up spammy websites with a bunch of spammy content, and then they link to the BBC and CNN and Google and Wikipedia, and they think, oh, search engines will think that my website is high quality because I link to these other high-quality websites. But actually, search engines have evolved quite a bit over the past decade or so; they've recognized this tactic, and they're good at ignoring it.

Let's see, what else do we have here? Yeah, I don't know. Anything else from your side that's on your mind that we need to talk about before the weekend? Otherwise, we can take a break here too. That works from my side as well. All right, then let's take a break here. Thank you all for joining. Thanks for all of the interesting discussions and all of the questions that were submitted. As always, I'll be setting up the next batch of Hangouts probably later today or early next week. They should be ready, I think, in two weeks again, and hopefully I'll see some of you then again.
In the meantime, if there are things on your mind, feel free to tweet us on Twitter or to post something in the Webmaster Help Forum, where there are a lot of really smart people hanging out who can escalate things to us when things get weird and complicated. So until then, thanks again, and I wish you all a great weekend. Bye, everyone. Bye. Thank you. Bye-bye.