All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangouts. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do is Webmaster Hangouts, together with publishers and webmasters from around the world. As always, I'd like to give those of you who are kind of new to these Hangouts a chance to ask the first questions. Is there anything on your mind that we should start off with?

I have a question if nobody minds. Go for it. I'm a YouTube Top Contributor at the YouTube forums for the Brazilian market, for the Brazilian users. And I've been lately thinking about working on developing some content for the market of Brazil on the website. And my question is, since there is so much related content on the web already, what can I use as reference material to help me make quality content? I mean, what can you share to help me with that if I want to make quality content? Is there information that I can find that can sort of guide me to develop good quality content that doesn't duplicate, so it doesn't seem like I'm just retyping this information? I'm trying to help the users who work with me on the Brazilian YouTube forum for the Top Contributors program. So it's more of a Google-focused product, but I don't want to seem like I'm duplicating them.

I think you're totally on the right track. That sounds like a good idea. And it sounds like you have the right mindset there. So I wouldn't worry too much about finding documentation that tells you how to write good content. It sounds like you're not rewriting content. You're not just creating something for the sake of putting something online. So that sounds like a good idea. And what I would recommend doing there is also making sure that you always have that feedback loop from your users, that you have a chance to hear from them and say, is this the right thing? Am I doing the right kind of content that you all like? Or is this something that you think is terrible or boring? Kind of like you would have on YouTube, where you have the comments below the videos, those kinds of things.

OK, so good. Thank you. All right, more questions from any of you others who are kind of new here. I see a lot of new faces, which is awesome. Yeah, feel free to unmute or jump on in. Otherwise, I'll just run through some of the questions that were submitted so far, and we can see how that goes. Let me just mute some of you, because there seems to be a bit of noise in the background. I'm not sure where that's coming from. If you want to jump in in between with comments or questions, feel free to unmute.

All right. So we're starting off with a simple question about mobile indexing, which is probably not so simple after all. We currently use dynamic serving for the mobile theme on the same URLs as desktop, but we're moving to a PWA to replace our mobile theme. Due to technical limitations, we can't use the PWA on the same URLs as the desktop theme, so we'll be launching it on an m. subdomain. Are there any things we need to be careful about?

Essentially, the main things to watch out for there are just the same as with any mobile site, so it's not specific to PWAs. In particular, we need to have that connection between the desktop pages individually and the mobile pages, so the link with the rel alternate from the desktop and the link with the rel canonical back from the mobile pages.
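To make that connection concrete, here is a minimal sketch of the annotations being described, assuming a hypothetical desktop site on www.example.com and a mobile site on m.example.com; the domains and the helper function are only for illustration and aren't anything specific to the site in the question.

```typescript
// Sketch only: example.com / m.example.com are hypothetical placeholder domains.
// Desktop pages advertise their mobile equivalents with rel="alternate",
// and mobile pages point back to the desktop version with rel="canonical".

interface MobileAnnotations {
  desktopHeadTag: string; // goes into the <head> of the desktop page
  mobileHeadTag: string;  // goes into the <head> of the mobile page
}

function buildMobileAnnotations(path: string): MobileAnnotations {
  const desktopUrl = `https://www.example.com${path}`;
  const mobileUrl = `https://m.example.com${path}`;
  return {
    // Desktop page: point to the mobile version of this URL.
    desktopHeadTag:
      `<link rel="alternate" media="only screen and (max-width: 640px)" href="${mobileUrl}">`,
    // Mobile page: point back to the desktop (canonical) version.
    mobileHeadTag: `<link rel="canonical" href="${desktopUrl}">`,
  };
}

// Example for one URL pair:
console.log(buildMobileAnnotations("/products/blue-widget"));
```

The media attribute value follows the commonly documented pattern for separate mobile URLs; the exact breakpoint isn't the important part, the paired alternate/canonical relationship between each desktop URL and its mobile URL is.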
For PWAs, this means that we need to be able to crawl and render the mobile pages as well, so I would double-check that with Fetch as Google so that we can actually pick out that link back to the desktop pages. And with that, we should be able to recognize that this is your desktop site, this is your mobile version of that site, and swap out the URLs on demand when users search on mobile devices.

The second question kind of goes into the same topic with the mobile-first index. What about the rel canonical and rel alternate tags? This stays the same, essentially like we have it documented, where we have the rel alternate tags from the desktop pages to the mobile versions, and on the mobile version, when we render that page with the PWA, we would probably need to render it to actually see all of the content, and we'd need to see the rel canonical link back to the desktop page. And with that, we can understand this connection between this is the desktop page, this is the mobile page, and for mobile-first indexing, we would just index the mobile version of the page.

Hey, John. Hi. Hi, it's Max here. You just answered my question. Oh, awesome. If we go back to the first part of the question, where we'll be moving from dynamic serving to the PWA on separate URLs, will there be a kind of migration process where, because the traffic for mobile has been going to a single URL, we'd expect to see some kind of fluctuations while the traffic is moving over to our new mobile domain?

Probably not so much, especially with the mobile version, since that's not the one that we use for indexing. You probably wouldn't see such strong or visible fluctuations in search because of that change. So what might happen is that it'll take a while for us to understand the connection between the mobile and the desktop pages, so for us to show the mobile URLs in the search results. But the ranking and the indexing should remain the same, because we have the desktop version that stays the same for crawling, indexing, and ranking. So that's kind of an easier change than if you change the desktop site or move the desktop site to different URLs.

OK, thanks. And just another question on point two, my question about the canonical links and the rel alternate: will they always stay that way in the future? We won't have to switch the alternate to the desktop and the canonicals to the mobile? I don't know. I can't make any promises for always in the future. I suspect at some point there will be differences. But at least our plan with mobile-first indexing is to make it as easy as possible, which means that we kind of try to handle that as much as possible for sites. We don't want sites to have to change things around, because even adding a simple meta tag is sometimes something that takes a lot of time, and we understand that. And doing something like changing connections between pages takes even longer, because you have to try to get that right. So that's something where we're saying we're going to try to take the existing markup that you have on your pages and just use that for mobile-first indexing. OK, that's great. Thanks, John. Sure.

All right, off to a question about internal links. If you have two internal links on the same page but with different anchor text, and these are pointing to the same destination internal page, would Google use or give preference to the first anchor on the page, or distribute the weight equally between the two? How does that work?
What can we do to make internal links as efficient as possible? So I guess there are two aspects here. On the one hand, you could probably work out what the internal implementation on Google's side is at the moment for something like this. On the other hand, our general recommendation is to just make sites that work well for users. And that's something where we constantly change our internal implementations to try to figure out what we need to do on our side to understand what a normal website looks like. So if you're creating a normal website and you have multiple internal links going to the same page with different anchor text, then that's something that we should be able to handle fairly well, without you having to artificially think about which link is the first one that I need to focus on or that I need to tweak. That kind of doesn't really make sense for us. So in principle, when you take a step back, our goal is to be able to understand natural websites. So if you create a natural website, then that should just work on our side.

We're redesigning our branch pages. Can the length of a page ever be seen as negative, as our site template is A4 width, as opposed to many sites that use the whole screen width, thus allowing them to get more information on a page without having to scroll down or to reduce information in tabs? Any kind of question with "ever" or "always" is really kind of tricky for me to answer. From my point of view, I don't see this as being a problem at all. So if you create a page that's longer, or one that has kind of condensed information across the full width of the page, that's really totally up to you. And especially on mobile, all of that changes anyway. So that's something where I would focus on usability and making something that works for your users, and not worry so much about how Google actually picks up that content. We've seen some sites that have really long pages that perform really well on Google. We've seen other sites that have fairly short pages, and more of them, that perform really well on Google. That's kind of up to you. That's something where I'd recommend you try things out, see what works well for your users, and focus on that.

We know about a site that has scraped our content. That site is a doorway site, and they're spreading malware, and they're bad. We're trying to fill out DMCA claims against them, but with no luck, and it seems that for some search queries, Google shows their results instead of ours. What can we do about that? So in general, the DMCA process is probably the right thing to do there. This is a legal process, and I can't give you legal advice, so I can't tell you if it's exactly the right thing to do there. But you can definitely check in with a lawyer and look at that process and see if it applies here. In general, this is something that you can do. You can submit to both Google and to the website's hoster to kind of have them take action on that. The important part there, especially when it comes to Google, is that this is a per-page setup. It's not something where you can say, this whole site is scraping mine, but rather you'd have to say, well, this individual page and this individual page is a copy of this individual page or that individual page on my site. And then that would be something that Google's legal team would be able to process and perhaps take action on if that's appropriate. So that's kind of what I would aim for there.
There is no kind of backdoor system on Google's side where you'd be able to contact someone from Google's search team and say, hey, this site is copying mine, I can't get the DMCA to work, can you just take it out for me anyway? That's not what we would do.

Sometimes when I want to go to a site that I like, I type the URL in the Chrome address bar directly and don't use Google. Can you see this direct URL type-in and say, hey, this site must be awesome? No. That's generally not something that we would see. And as far as I know, I don't think we would use that for search at all. So that's something where sometimes, depending on the settings that you have in Chrome, it might be that they log this for diagnostics purposes. But as far as I know, that's not something that we would use at all from Chrome. If you think this is a really awesome site, then in general, we try to collect other signals as well that tell us this is a really awesome site, rather than just individual people who happen to type a URL into the Chrome address bar. Especially on mobile, typing things into your address bar is really painful. So I don't think that would be something that would be that useful for us.

We recently migrated domains from a .net.au site to a .com.au website, and in the process experienced an extreme drop-off in site traffic. Is there any way to see if our value has been retained? So in general, if you do a site move, which would be something like this, I'd recommend you follow the steps that we have in the Help Center with regards to setting up redirects properly, making sure that everything cleanly redirects on a per-URL basis. And in general, that's something that should work out fairly well. So it doesn't really matter so much from where to where you're moving. In this case, you're moving between two country-code-specific top-level domains, which is totally fine. This means that geotargeting will generally be the same. You don't have to change any settings in Search Console. So that's something that, in general, we will be able to forward fairly easily. What might be happening here is that, perhaps, Google has just decided, or rather, its algorithms have decided to reevaluate your website. And that might not be related to your site move at all. It might just be that this kind of reevaluation would have happened regardless of where your website is hosted. So that's one thing that might also be happening there. What you could do to try to figure that out is to see when exactly these changes happened in search. Was it really right when you did the site move, or was it somewhere before or after that site move that you started seeing this change in search? And depending on that, that might be something where you'd be able to say, perhaps there is a technical issue with the site move, which you would probably be able to see, or perhaps it's more that Google is generally just trying to reevaluate my website overall. And that might be a sign that you should perhaps take a step back and think about quality, and think about what you could do to significantly improve the quality of your website overall. If you're totally unsure which direction this is headed, I'd recommend going to the Webmaster Help Forum and getting advice from other people there who have done similar moves or who have experience diagnosing similar kinds of issues across different types of websites.

Is it a problem if we 302 all non-mobile users from an AMP page to its desktop canonical?
I personally don't like that, but the dev thinks it's cool to do. We're also redirecting Googlebot non-mobile traffic, as it's not a mobile user agent. I don't know what would actually happen there. So with regards to what users do, that's kind of up to you. The thing to keep in mind there is that if this AMP page is being shown in the search results with the AMP viewer, then that would be the cached version of the AMP page. It wouldn't be on your server itself. So that wouldn't be something where you'd be able to do this kind of redirect on your site. On the other hand, if people are going to your AMP pages because they found a link there, and this is the AMP page itself on your website, then that's kind of up to you. With regards to Googlebot traffic, I don't actually know for sure which types of Googlebot would be expecting to see the AMP content there to be able to confirm that this is actually an AMP page. It might be that we're also crawling AMP pages with the traditional Googlebot, which isn't really desktop- or mobile-specific. And if you're redirecting that Googlebot as well, it might be that we wouldn't be able to pick up those AMP pages properly for search. But that's something you could double-check on your site. Maybe just take a handful of URLs, set up this kind of redirect, and see how that works out for you. And if these pages continue to be shown as AMP pages in search for mobile users, then probably that works out. I suspect the setup where you're redirecting Googlebot as well might be a tricky one. So that's one thing I'd kind of watch out for. And I kind of assume that this is something that could change over time as well, because it's not really a defined state.

How does Google treat diacritics in text on a page or in title tags? We try to treat characters the way that they come. So this is particularly if you have kind of these accent marks on characters, or kind of variations of characters where there are multiple versions that are sometimes used. In practice, what happens here is we try to understand which words are synonyms of each other. And we try to do that in an algorithmic way. So it's not that we have any linguistic models built into search that say, oh, this character maps to this one, and therefore all words that use this character can also be found like this. But rather, we try to recognize which words have synonyms based on what we see people searching for. And based on that, we'll try to pick that up and show that in the search results itself. So what I would do in a case like this, if you have these characters in your language on your pages, I would just write naturally. Just write them the way that they're normally used in your area, and in practice, that should just work out.

We have a child care business with five locations in Houston. Our website has a Locations page with all five locations listed. Do you recommend having individual pages for each location to rank better? The content would be the same on all five pages. So I think with five individual pages, you could go either way. It might make sense to have individual pages, especially if you have unique information for each individual location. That could be things like your employees in those locations, opening hours, or something that's kind of unique to each of these locations. Those would be the kind of things that I would perhaps list on these types of pages. If you don't want to do that, then probably one page with those five locations listed on it would work just as well.
Essentially, this is kind of up to you. Sometimes it means more work to create more pages. Sometimes it means better-targeted users going through those pages. That's kind of hard to guess from my side. With regards to the second part of the question, saying that the content would be the same on all five pages: if the content is actually the same across these five pages, then I suspect you won't get that much value out of having separate pages there. But again, with five pages or five locations, it's such a small number that, from our point of view, we wouldn't see this as a doorway site. We wouldn't see this as being problematic. It's really more up to you what you'd like to put on your website.

Our site used to be at the top of Google, but for a year now it has continued to go down. It has reached the seventh and eighth page. We didn't do any black hat SEO. We don't have server issues, nothing unusual. What could be the reason? In practice, this is something that can happen all the time. So algorithms can change. Our evaluation of your pages can change over time. It can be that the rest of the web changes as well. So just because you're not doing anything explicitly wrong doesn't necessarily mean that your website will always be ranking at exactly the same place forever. So this is something where, if you're seeing changes like this, I'd recommend kind of taking a step back and thinking about what you could be doing on your site that is significantly better than all of the others that are ranking for these kinds of queries. So don't try to be just as good as the others. Don't try to just say, well, my website has been like this for the past 10 years, I'll never change it, it's always been good. But rather think about what you could do to kind of significantly move things forward. As an example, on our side, we constantly do experimentation to figure out, are we making the right assumptions? Are we still doing the right things? So for instance, if you go into Google Search, you will almost always be in multiple experiments, where we're constantly testing things and constantly trying new things out to figure out what we need to change to continue to evolve. So these are things that I would recommend doing, rather than just saying, well, my website has been like this, and I haven't changed anything, therefore it should continue to be ranking or continue to be as relevant as this.

How come a home page that's still indexed in Google and ranking for numerous queries no longer shows up on the first page of Google for the first keyword it was ranking for? This is kind of the same thing. Rankings can change. The rest of the web can change. It's normal to see fluctuations and changes over time.

How can we see which URLs have been indexed in Google so far? If we submit a sitemap in Search Console, Google will say 95 out of 100 are indexed, but not which of those five are not indexed. So there is no way to get a list of the indexed URLs at the moment. What you can do is split your sitemap up into smaller parts and kind of look at it that way. In general, however, I wouldn't focus so much on trying to get a high number of URLs indexed, but rather focus on making sure that what is shown in Google is actually relevant and useful for users and for yourself. So in practice, when we look at most websites, for example, if we look at the data shown in Search Analytics, you'll see that for most sites, there is only a handful of pages that are actually shown in the search results.
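For the sitemap-splitting idea mentioned above, here is a rough sketch of how that could look. The URLs and file names are hypothetical and not from the Hangout itself; the point is simply that with one large sitemap broken into several smaller files under a sitemap index, Search Console reports submitted versus indexed counts per file, which narrows down where the non-indexed URLs sit.

```typescript
import { writeFileSync } from "fs";

// Sketch only: URLs and file names here are hypothetical.
// Splitting one large sitemap into several smaller files under a sitemap index
// means Search Console reports submitted vs. indexed counts per file, which
// narrows down which group contains the URLs that aren't being indexed.

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

function writeSitemaps(urls: string[], urlsPerFile: number): void {
  const indexEntries: string[] = [];

  chunk(urls, urlsPerFile).forEach((group, i) => {
    const name = `sitemap-${i + 1}.xml`;
    const body =
      `<?xml version="1.0" encoding="UTF-8"?>\n` +
      `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
      group.map((u) => `  <url><loc>${u}</loc></url>`).join("\n") +
      `\n</urlset>\n`;
    writeFileSync(name, body);
    indexEntries.push(`  <sitemap><loc>https://www.example.com/${name}</loc></sitemap>`);
  });

  const index =
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    indexEntries.join("\n") +
    `\n</sitemapindex>\n`;
  writeFileSync("sitemap-index.xml", index);
}

// Example: 1,000 URLs split into files of 100, so each group's indexed count is visible.
writeSitemaps(
  Array.from({ length: 1000 }, (_, i) => `https://www.example.com/page-${i + 1}`),
  100
);
```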
So getting more pages indexed doesn't necessarily mean that you'll be more visible in search or that you'll get more traffic. So I would focus more on making sure that what you are providing on your site is relevant to users. And in turn, we'll try to recognize that and index more of those kinds of pages.

I got a Search Console security warning for one of our sites, without further explanation. How can I find out what the problem might be? I would recommend going to the Webmaster Help Forums for something like that. The folks there are extremely good at helping with security issues and trying to kind of analyze what might be going wrong and what you could be doing a bit differently.

Why is the organic session count in Google Analytics app indexing data so drastically different from the clicks in Search Console data? I don't know. So specifically, I don't know exactly what you're looking at there with regards to the app indexing data. In practice, Google Analytics tracks things quite differently than Search Console. So Search Console will track kind of the traffic from someone searching to clicking through to your website. And Analytics tracks the other half, so kind of the part when someone is on your site, or in this case, within your app, and onwards from there. I don't know what specifically you're looking at with regards to app indexing and Analytics, because there are different ways of setting that up and of tracking kind of the user data within Analytics for apps specifically.

There are some questions about people trying to jump into the Hangout. So for what it's worth, there are 10 slots available in the Hangout. And when they're taken up, you can't really get in unless someone leaves again. So that might be one of the problems there. The link to the Hangouts I usually post in the thread for that Hangout on Google+.

We know that Google recommends responsiveness, but what's the take on dynamic serving if the URL stays the same? So essentially, I guess the question is responsive design or dynamic serving, which one of these is better. So responsive design means that... OK, I'm back. All right. Wow. Glad that came back. OK. So dynamic serving, responsive design, again. So with dynamic serving, the tricky part is that since you serve different content, it might be that Googlebot on mobile doesn't see the full content. So when we shift to mobile indexing, that could mean that we're seeing slightly different content or simplified content compared to what you have on your desktop site, which could mean that we have less to work with for indexing. So in particular, things like meta tags or images or videos on pages, or even the textual content on the page, that might be something that is perhaps not as optimal on your dynamic version if you're not watching out. So this is definitely something you can do really well with dynamic serving. There are lots of sites that use dynamic serving and do it in a really fantastic way, but it's definitely something worth watching out for. With responsive design, you don't have that problem as much, because the HTML is the same. So all of the links, the images, the metadata, all of that is already on those pages. So with dynamic serving, you just need to make sure that you're really providing the full content as well.

I live in Norway. I write in Norwegian. When you make changes to the algorithm, when are those changes rolled out to languages other than English? Is that instant, or is there a waiting period? So I am in Switzerland. We have the same worries.
In general, when we make algorithms for search, we try to make them such that they're valid across the board, across all types of websites, regardless of the language, regardless of the font that's used or the characters that you use to write. We try to make those changes as universal as possible. That's essentially the best way to work with the whole web, because the web is very diverse. There are lots of different languages out there. There's lots of content in non-English languages out there. And we need to make sure that any search-related change that we make across this vast collection of content is valid and useful across the board. So as much as possible, we try to make sure that these changes are rolled out to all languages all the time.

There is one kind of traditional exception there, which comes with regards to things like rich snippets, structured data, the Knowledge Graph information, kind of the business-related information that we show in search, things like the connections also with streaming services, where we maybe show a link to a streaming service that provides the content that you're looking for. Those kinds of things are sometimes a bit trickier for policy or for legal reasons, so we can't roll them out in all countries all the time. So sometimes there are issues there with regards to specific search features that just take a bit of time to roll out everywhere. And often that means we roll them out in the US first. Sometimes we roll things out in other languages first as well. But these are the kinds of things that I don't see all the time in Switzerland, and you probably don't see all the time in Norway either. When you look at our blog posts, specifically around structured data, there are probably things there that say, well, we're rolling this out in the US now, and depending on how that works, we'll work on getting it to all other countries as well.

Why do error reports take so long to update? Good question. So I assume you mean Search Console, like the crawl errors, those kinds of reports. The tricky part there is we can't crawl the whole website all the time. So we have to take an incremental approach and crawl things step by step, which means we will see some errors fairly quickly, and some errors will take a bit longer to see. And similarly, we'll be able to kind of resolve some errors fairly quickly, because we've re-crawled them and seen that the error is gone. And for other errors, it just takes a lot of time for them to update. So this is something where crawling and indexing can take anywhere from a couple of seconds or minutes even, to a couple of days, to a couple of months. It could be even up to a year for us to kind of re-crawl and reprocess individual URLs. So if you have a site-wide crawl error or some other kind of issue across your whole website, then you'll generally see that the more commonly crawled pages kind of get cleaned up fairly quickly when we re-crawl them and see the error is gone. And the rest of your website will take quite some time for us to actually re-crawl and reprocess. So it's not so much that Search Console is saying, oh, I am too bored to actually look at your website and look at your errors. It's more that we don't want to overload your server, and therefore, for technical reasons, we can't re-crawl everything immediately. So it's just going to take a bit of time.
In practice, what also happens, as a slight effect there, is when we see bigger site-wide changes across a website, we will try to re-crawl it a little bit faster than we normally would, just to make sure that we're able to kind of catch up with the important pages a little bit faster.

I'm an SEO manager at one of the top three agencies in the MENA region. Let's see. I'm faced with an issue that I haven't seen before. Have you seen a website that ranks in almost all regions for generic and long-tail keywords except for the targeted region? For example, "SEO services Dubai" is ranked second on .com.sa and sixth on .com, while it's ranked on page 16 in our target area. What can we do? So it's kind of hard to say what specifically you're doing wrong there. I don't know which website this is. But in general, we don't have algorithms that try to kind of hide a website in its target area. So that's not something that we would be doing there. It might be that we're having trouble understanding the geotargeting of this website. So that's something that might be happening there. I don't know if this is the case, but it might be that there are things like links that could be causing a problem here too. Especially when you're looking at an SEO site, maybe you've been building a lot of kind of weird links that are hard for us to understand as actually belonging to your region, and that could be playing a kind of role there as well. But this is something where I'd almost recommend posting in the Webmaster Help Forum to try to get some more advice there. My feeling is that this is not something where our algorithms are doing something explicitly against your website, but rather there are probably just confusing signals for your website that make it hard for us to understand what it is that you're actually trying to target.

I'm trying to replicate an experiment here. They link to a page about AJAX and SEO, about whether Google indexes AJAX content, and it goes on: I insert links with JavaScript into a page, and I can't see that Googlebot is actually crawling and indexing those linked pages. Will that be a problem? So the tricky part here is that this is not really something unique to JavaScript. If you set up a random test site and you have links in there, then in general our systems are going to recognize that this is some random test site and that it's not worthwhile to crawl everything that we see there. So that's probably something where it's less a matter of the technology that you're using to kind of test this, and more that we wouldn't be crawling and indexing everything that we find there anyway. So that's probably what you're running into there. In practice, if you're creating a website with a JavaScript framework, if you're using React or Angular or anything like that, and you use normal "a" elements for links between your pages, and those links point to actual URLs that we can crawl and index, or render at least, then that should work out just fine. There are a bunch of sites that are set up like that, and they work fairly well in search. So that's not something where you'd need to kind of artificially set things up. One thing we do recommend is to use something called isomorphic JavaScript, or for Angular, it's Angular Universal, I think, where essentially you pre-render the first page that any user sees, and you provide the raw HTML for that user. That has the effect that for users, these pages load really, really quickly, because they don't have to do all of this JavaScript to actually build up the page.
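As a minimal sketch of that pre-rendering idea, assuming a plain Node server and some made-up product data (this isn't any particular framework's API, just an illustration of the shape of the approach): the first response is complete HTML, and the internal links are ordinary anchor elements pointing at real, crawlable URLs.

```typescript
import { createServer } from "http";

// Sketch only: hypothetical product data; not any specific framework's API.
// The server returns the first page as complete HTML, and internal links are
// ordinary <a href="..."> elements pointing at real URLs, so crawlers and
// link previews don't depend on client-side JavaScript running first.

const products = [
  { slug: "red-widget", name: "Red Widget" },
  { slug: "blue-widget", name: "Blue Widget" },
];

function renderHomePage(): string {
  const links = products
    .map((p) => `<li><a href="/products/${p.slug}">${p.name}</a></li>`)
    .join("\n      ");
  return `<!doctype html>
<html>
  <head><title>Widgets</title></head>
  <body>
    <ul>
      ${links}
    </ul>
    <!-- The client-side bundle can hydrate and take over navigation later. -->
    <script src="/app.js" defer></script>
  </body>
</html>`;
}

createServer((_req, res) => {
  // Every visitor, including crawlers, gets the pre-rendered HTML response.
  res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
  res.end(renderHomePage());
}).listen(3000);
```

In a real setup, a framework's universal or server rendering would do this for every route, and the client-side bundle takes over navigation after the first load.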
And for search engines and other kinds of browsers or user agents that access website content, such as social media sites, they see the full HTML as well. And essentially, they don't have to do any of the JavaScript processing either. So if you use something like that, then you kind of have the best of both worlds. You have the fast development times from a modern web framework, and you still don't have to worry about anything related to search, because you're essentially providing a static page to search engines.

I have a client that uses a tracking tool to see how they're ranking for specific keywords. Search Console doesn't give us this information. Are we violating Google's webmaster terms of service on automated queries? Yes, probably. So a lot of these search results tracking tools scrape our search results, against our terms of service and against our robots.txt. And that's something that would be, in general, violating our terms of service. So that's not something that we would recommend doing. It doesn't matter so much which tool you're using, but if you're scraping our search results, then that's not really something that we like to see. On the one hand, we explicitly have robots.txt files set up for these kinds of things. On the other hand, this is a big load on our servers. We see a gigantic load on our servers from various tools that are scraping us, which makes it a lot harder for us to kind of provide that level of service that people actually expect from Google. And finally, the other thing to keep in mind is that if nobody is searching for these queries, then it's probably not so useful to track the ranking for those queries, because that has kind of no direct value there. So if people are searching for those queries, then Search Console will be showing that as clicks and impressions in Search Analytics. If nobody is searching for those queries, then it doesn't really make sense to track the ranking for those queries. It's like you're ranking for this query that nobody is actually searching for. That doesn't provide any value for your website. That doesn't provide any conversions. That doesn't bring you any customers.

We noticed an increase in 500 server errors since March, and can view the actual URLs listed in Search Console. They're mostly very old links, and it looks like the number is increasing further. Is there something like a most common reason for an increase in 500 errors? What's your recommendation for those? So 500 server errors are errors that we see from your server directly, which tell us that for some reason or another, this URL is causing an issue on your server. For the most part, that doesn't matter so much. If this URL is not meant to be indexed, then an error is fine. On the other hand, our systems try to recognize an increase in 500 errors and assume that this increase is perhaps due to what they're doing. So our crawling systems in particular will recognize a rise in server errors, assume that perhaps we're crawling too hard, and step back with the crawling. So that's something that I would watch out for there. If you're really seeing a change in the number of 500 errors on your site, then that would be something I'd try to take action on. So what I would recommend doing there is looking at the URLs that are listed in Search Console.
And even if these are older URLs that are no longer valid, I'd make sure that your server returns a clear 404 or 410 result code telling us that this URL is invalid, rather than returning a server error, which tells us maybe this is valid, maybe it isn't, I can't tell you because I can't process the rest of your request. So trying to have those URLs return 404 or something similar would be my recommendation there.

All right, looks like we have about 10 minutes left, but still a bunch of questions. Let me just pick a handful here to kind of go through.

We've been hit with a penalty for spamming structured data. We tried to fix the problem. We looked at how other sites are implementing this, and they seem to be doing even worse. So what I would recommend doing there is posting in the Webmaster Help Forum. There are a bunch of people there that have experience with regards to the policies around structured data who can give you a bit of advice. In particular, I wouldn't look at how other people are doing this and say, well, if they're getting away with this, then I can get away with this, but rather think about what you need to do to actually comply with our policies. In practice, what is also the case is that when the structured data team takes action on a site, it just affects the rich snippets that are shown. It doesn't change the ranking of a site. So this is something you can probably work on at your own pace. You don't really have to kind of rush through it, because the rest of your site is still being shown in search normally.

So, John, this is my question. Oh, awesome. Yeah, so we've seen quite a dip in search traffic, actually. It's been about a 10% dip because we've lost our rich snippets. Obviously, we've tried to go through reconsideration. And the first one, I think it took two days. The next one, it took a week. And now we're on to our third reconsideration request, and it's taken almost a month. Is there some sort of timeline for this? Sometimes it takes a bit longer, especially if things are going back and forth, then that can take a bit of time. Let me just double-check. So it also looks like you might have had some issues with hacking. Is that possible? Yeah, so we had a hacking problem, I think, on our HTTP site, where we also got the spammy structured data penalty. But we saw it affect our wider site, and our main site is HTTPS. So we're just trying to fix it across both of them. Let me just double-check. Maybe we can take a quick look at this afterwards. Yeah, that would be a great idea. Yeah, maybe after we end the broadcast, I can take a quick look at this. But in general, this is something where usually the folks in the forum are really good at recognizing these types of issues. And they can guide you and say, hey, you're using reviews in a way that wouldn't be compliant with the policy guidelines. So I can double-check what's kind of happening there. Maybe I can give you a bit more information. I don't know. But for the more general situation, if you're in something similar to this case, I'd recommend checking out the forum.

Let me just double-check the other questions that are still here. There's a quick question that I think we had briefly touched upon before we started. Can you confirm that these two things are not recommended by Google: one, building backlinks from article and video submission sites? Yes, that's correct. That's not recommended. That sounds like unnatural link building.
And two, paying for Google Ads doesn't help improve your search rankings? That's also correct. Ads on Google don't affect the natural rankings of a site.

Still a bunch of questions. So maybe I can just switch over to live questions for the moment. And if your question is one of the ones that is still listed here that I haven't gotten to so far, feel free to copy it over to one of the next Hangouts. I'll be setting those up probably later today. Or, alternatively, if you'd like to get an answer a bit faster, feel free to jump into the Webmaster Help Forum for some advice there. All right. What else is on your mind?

John, can I ask a question regarding the infamous sandbox? Over time, Google has been saying there is no such thing as a sandbox. And on my part, trust me, I believe you. But I've been participating in many forums, and I always see people saying, Google is lying, they are not telling us everything that really happens, because they post pages and they take a long time to get ranked in the search results. But on the other hand, it's also not consistent, because sometimes they see that some pages do get ranked. And basically, what many people do, those that do not believe, is they start using black hat techniques to build links and such to rank their pages faster. And to a certain extent, it works, they say. And so my question would be, what can you tell these people so they start believing Google, eventually stop resorting to black hat techniques, and get their pages ranking faster? What could they be doing wrong so that they don't get their pages ranked as fast as they'd like?

That's a tricky question. It's kind of like, these people don't believe you, so you should tell them that they should believe you. I don't know what to say. So if you don't want to believe our advice, then, yeah, I don't know. But I mean, it's completely normal that for some areas, it's not that trivial to get a new page into the search results. And that's kind of the normal way that things work. It doesn't mean that there is a sandbox or that there is anything crazy happening there. It doesn't mean you have to resort to black hat techniques to kind of get things in there. Chances are those black hat techniques will cause more problems for your site than they'll actually provide value. So from my point of view, this isn't something that is black and white, where we'd say any new page that shows up will always be shown on the first page of the search results on Google, because that wouldn't make sense. If we have a lot of really good content available already, then of course new pages are going to take some time to kind of build up value that is comparable to the other sites that are out there. So there's no magic trick where I'd say you can get any page to rank on page one in Google for any keyword that you want.

OK. Just to finish this question, you mentioned that for an existing site, when it publishes a new page, Google will somehow evaluate that the site has been publishing consistently good content, and that eventually the new pages of that site would benefit by being taken more seriously or with more relevance. Is that a correct interpretation? Yeah, yeah. To some extent, that happens. When we see something completely new on a website and we don't really know how to judge it, we might kind of defer our decision until later when we have a bit more information.
And in the meantime, take some of the signals that we do know about from this website into account. So that can definitely happen. Great, thanks.

OK, good morning, John. Hi. Hi, how are you? Regarding the question about my website ranking in almost all of the targeted countries except the targeted country: the whole site is already on a ccTLD, which is my country code. So there's nothing to manage when it comes to the targeting. But I'm not sure. I think it's kind of an algorithmic penalty. So I'm trying to figure out what the exact problem is so that we can avoid it. The website is chain reaction to TAE. OK, we have been facing this for almost two years, and I've been trying to fix it for a long time. I have joined many live sessions, even on Webmaster Central. Can you type the URL in the chat, maybe? Yes. I can't promise that I have an absolute answer for that. It sounds like something that's probably a bit trickier to resolve. So probably the best approach would be to kind of look into the Webmaster Help Forums and try to get advice from other people that are in similar situations. And another aspect might also be that in the target market, maybe the competition for those keywords is just much harder than in other areas. So that's something where you would traditionally see differences in rankings as well. But I'm happy to take a look if you could post the URL in the chat. I have just posted it, thank you very much. In the Hangout chat, or which chat? No, the Google chat. Oh, OK. So that would probably be on my personal profile, so I'll have to kind of check that out separately afterwards. Yeah. OK. Sure.

All right. Let's take a break here. I'll stop the broadcast, and we can take a look at the other sites afterwards as well. All right. Thank you all for joining in. I wish you all a great weekend. And maybe I'll see you all in one of the future Hangouts again. Thanks, everyone. Thanks, John. Bye-bye.