All right, welcome everyone to today's Webmaster Central office hours hangout. My name is John Mueller. I'm a Webmaster Trends Analyst at Google in Switzerland, and we have some awesome guests here today. So awesome to have people join live. We had a few technical problems getting started, but restarting seems to have fixed it. Who would have thought? I don't know. OK, so would you like to introduce yourselves?

Hey, guys, this is Alexandra. I'm head of content marketing and PR at an agency, a local agency in Romania. Can I say it? Vertify? Small promotional moment. Just joking. And yeah, I'm glad to actually be here in the Google hangout. I'm usually more in the chat, not on the video side — rarely the face. I work with Mihai, by the way. Yeah, and he's usually the face in the hangout. I'll pass it to you.

Hey, guys. I'm Mihai Aperghis. I'm the founder of the agency Alexandra mentioned, Vertify, an SEO agency in Bucharest, Romania. And I'm also part of the Top Contributors program in the Webmaster forums. And I've been joining these hangouts for the past two or three years, I think, John. Something like that. So it's really awesome to finally be on the other side of the table, so to speak. Thanks for having us.

Hello, everyone. I'm Islam Bakar. I work at Property Finder. It's a real estate portal in the Middle East and North Africa.

Cool. Oh, I'm Rado, and I'm joining from Vienna. They are behind the scenes. Also, for me, it's the first time on the other side of the screen.

Cool. Awesome to have some people here in person. That always makes it a little bit more interesting, I think. I don't know, to get started, do any of you want to start out with a question or comment or anything?

OK, I'll start. So like I mentioned, I'm on the content side, and I've always wanted to know: what does Google take into account when deciding to implement content features in different languages? I'm referring to People Also Ask, featured snippets — basically, things that pull content from different websites and make it easier for people to find their answer. How you do that in different languages, that's tricky. Do you need to understand maybe query intent, or do you need to understand the language better? Do you have to have enough content in that language? Is it a mix?

I think all of that comes together. So usually what happens is we try to find one place to start. And sometimes that depends on the kind of feature that we have, and on the special situation in a specific region. Sometimes it just depends on where the team is located. So a lot of times things start in the US, and then it's available in English because the team all speaks English and they can test it very easily. And based on that, they can expand step by step. For that, they need to be sure that we can recognize the queries properly, that we can recognize the content properly, and also that from a policy point of view, there's nothing problematic with launching it in other countries. So sometimes there are features where we say, we can try this out in English, but maybe in France they have special sensitivities around whatever they're working on, so we will be a bit slower in Europe, or a bit slower in France, before we try it out there. Or we might have to make some changes. Other times, it's also based on a local region where we know that something special is there and we might be able to do something to help them.
That's something that I've seen every now and then with India, for example, where we know there are lots and lots of users in India, but they have a very unique kind of web ecosystem. They interact with the web in a unique way. So it makes sense to have some features that are kind of special for them.

Right. But regarding what Alexandra mentioned — People Also Ask, for example — we kind of use that to understand what kind of information people are searching for. Other than just looking at keywords, we look at what actual questions people ask, so we know what kind of content we should have that is as relevant as possible for our users. That's not something that we have in Romania, for example. It's something that exists in English, and it helps us a lot to better understand the audience. So is it like a manual thing? Or is it an algorithmic thing, where many things have to come together until there's an automatic decision?

Usually, it's based on the data that we have, but it's done manually. So for a lot of these features, it's where we say we now have enough trust in our algorithms, we can figure out the Romanian language properly, and then we can do a little bit more there. But that's something the team usually — I mean, it depends a bit on the feature, but for a lot of things the team will discuss it and say, we're good now, or we're not good now. Or maybe we need to talk with the publishers and say, hey, there's a special markup that you need to add, or we need to encourage the ecosystem to go more in that direction.

All right. The simplest one, I mean — the simplest featured snippet is just a paragraph. You don't need any markup; you mostly need to understand the language rather than structured data. Yeah, yeah. I mean, it depends on the feature. Do you have People Also Ask in German? I don't know. OK. I don't know. Well... I don't know. It's... So yesterday we had a German hangout, and they asked me about some features in the mobile search results, and I was like, I've never seen this before. So I don't know if they have something different that I don't see. You're not much of a Google user. It's always awkward when they ask me, like, how do you optimize for this feature? And I'm like, I've never seen that before. Did they ask about images? Because we've seen on some articles that images are being pulled from the articles. Yeah. Yeah, maybe that's a new feature. It was like a weird one. On the English side, yes — where you have an article and you have, beside it, yeah, images. Like someone has a carousel of images. Yeah, and you can still see the headings. You also have some small sitelink things under the article. Yeah. I saw someone mention that on Twitter, I think, a while back, and Barry was like, oh, we've had this for a long time. Yeah. OK. Barry basically knows everything. Yeah. Thanks.

I have a question regarding the mobile-first index. For one of our websites, we noticed that it's already switched — from the log file analysis, I noticed that Googlebot is now crawling maybe 90% with the mobile crawler and 10% desktop — but we didn't receive any notification that we were switched to mobile-first indexing. And we have a separate mobile site; it's not responsive. And two days after this change, we noticed some ranking fluctuation. It's like the order — sometimes it becomes number three, one, two. It's not stable; it's changing every day.
Sometimes in one day, a keyword's ranking position changes three times. Even if you search from multiple locations, it's different. So at this stage, is it normal to see some keyword ranking fluctuation?

It shouldn't be tied to the mobile-first index. I mean, that can always happen. Usually what happens with mobile-first indexing is that we try to recrawl the site as quickly as possible with the mobile version, so that we have the whole index for this website updated. And during that time, maybe you would see some fluctuations, but usually that's very limited in the number of days. Yeah, and it's limited to a number of keywords. It's not the whole site; it's just maybe 10, 12 keywords. And only in two countries — the two countries where we noticed it's already switched. OK. It's not all the other countries; it's just two countries.

Yeah, I don't know. I mean, some fluctuations would be normal, especially if you have a slightly different mobile site. Our algorithms will probably look at it and say, mostly it's the same and we can switch it over, but if there are slight differences in things like internal navigation, that might have a small effect.

Something else: the website uses, like, a country location parameter. So if you're searching from outside the country, the ranking doesn't change — if it's number one, it's number one. But if you change your Google location setting to, let's say, the UAE or Saudi Arabia, you can see the ranking change. Whereas if you're searching from inside the country, you see the old ranking; it's the same, nothing changed. In any other country, even using a VPN, it's the same — except in the countries where it switched. That makes it hard, yeah. I don't know. I'm happy to take a look to see if there's anything weird happening, but I suspect that's tricky. Searching from here... oh man. Might be that the other guys are doing something that... No, nothing, nothing changed; it's exactly the same. It's exactly the same. So it's like the geo-targeting is slightly different? Yeah. And sometimes the ranking changes on desktop but not on mobile, and sometimes it's reversed — mobile, not desktop. Since it was switched, that's been happening. Yeah.

But we didn't receive any notification from Search Console that we were switched, though we did notice a spike in the crawl stats in Search Console. Yeah, yeah. OK, yeah. I think the messages go out a little bit later, so that might be kind of normal, but the ranking side sounds weird. I mean, it could be something like some data centers being updated and some not, but I don't know — happy to take a look.

And when you check the cache for the desktop pages, you can see the mobile site's cache, not the desktop. So the cached version is already on www, but it shows the mobile site. Yeah, OK. Not all of them — some of them, and for the others the desktop cache is a 404. Yeah. Now, that sounds like some of them have switched over to mobile-first indexing, maybe not everything yet. It's in process. Yeah, yeah.

Can I follow up, still on mobile-first? Sure. What will happen with those sites that still keep like 80, 90% desktop traffic — design-related, development-related sites where people will still use desktop for that? After switching to mobile-first, is it going to be tough luck for those? So we're trying to do it slowly, not to force sites to move over.
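(An aside on the log-file analysis Islam mentioned earlier: the smartphone/desktop Googlebot crawl split is easy to estimate from raw access logs. A minimal sketch in TypeScript, assuming a plain-text log with the user agent at the end of each line; the filename is a placeholder, and in practice you would also verify that hits really come from Googlebot, for example via reverse DNS lookup.)

```typescript
import { readFileSync } from "fs";

// Count Googlebot hits by crawler type; the smartphone crawler's
// user agent contains a mobile device string ("Android ... Mobile").
let mobile = 0;
let desktop = 0;

for (const line of readFileSync("access.log", "utf8").split("\n")) {
  if (!line.includes("Googlebot")) continue; // skip non-Googlebot requests
  if (/Android|iPhone|Mobile/.test(line)) mobile++;
  else desktop++;
}

const total = mobile + desktop;
if (total > 0) {
  console.log(`smartphone crawler: ${((mobile / total) * 100).toFixed(1)}%`);
  console.log(`desktop crawler:    ${((desktop / total) * 100).toFixed(1)}%`);
}
```

A rough 90/10 split like the one described above is a good hint that a site has moved to mobile-first indexing, even before the Search Console message arrives.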
So a lot of sites are shifting over at the moment. We have a lot more that are lined up. And it's something where we want to shift them over when they're ready, and there will be some sites that are not ready. So... But everything will move eventually — that's important? I assume, like, over the years we will want to shift everything over, so that it's in, I don't know, a stable state where everyone is in the same situation. Is there right now a trigger where, if you have a lot of mobile traffic, you get switched faster than if you haven't reached that level? No, it's not based on the traffic. It's really about whether the website is ready, and if the website is ready, we will shift it over. And I saw the discussion yesterday — you also said that it's not based on whether you're optimized for mobile or not; it's the same. Exactly, yeah.

But some sites are already identical. What I noticed is that with the ones already rolled out, it seems to depend on the traffic. Like, I have two websites: one is already rolled out, but it has the lowest number of sessions or clicks in Search Console, and the others have exactly the same setup but didn't switch. Yeah, maybe it's just a matter of time. We try to do it step by step, so that we don't switch everything over at the same time. So maybe it's like, this one now and the next one in a couple of weeks, or something like that. I mean, it's an interesting situation when you have exactly the same websites and some are switched over and some are not.

Before we go to that, can I follow up with another technical question? Sure. Not related to mobile-first. Just to have it on tape, please. So when you do a site migration and you change the domain name, is it a good time to also improve the URLs? Or, if you only change the domain, is Google going to understand everything faster and better than with a full URL change? Yeah, I would try to keep things the same if you're just doing a domain move, and do the other things step by step afterwards. So do the domain migration, and then do the full URL change when things have settled down a little bit. So do it in two stages, not all at once. Because if we can recognize that it's a clean domain move, we can transfer all of the signals that we have to the new domain. That works a lot faster, and there's a lot less fluctuation. And then later, if you make bigger changes on the rest of your website, that's something that happens within your domain. So basically two totally separate things.

What about changing the full URLs while staying on the same domain? Should that be done all at once, or in small stages? Which part? Changing the URLs. I don't know — it's always scary. It's scary. It's always scary. I would try to do it in stages. In stages, yeah. I mean, sometimes that's not easily possible, but if you can do it in stages, I think that... And stages would be based on page type, or whatever you want to pick? Yeah, just step by step, so that you can also monitor the change and see: is this change working? Is it not working? If it's not working, what did I miss? So that you can try it out like that. But generally it's something that you would want to do once and then keep for a lifetime. Yeah. Yeah, yeah. Be happy with the URLs within your website. I would try, especially for those internal changes, to create a structure that you can use for a long time.
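(A minimal sketch of stage one of the split John describes: a clean domain move that 301-redirects every old URL to the same path on the new domain, with no URL restructuring yet. Express is used purely for illustration, and the host name is a placeholder.)

```typescript
import express from "express";

const app = express();
const NEW_HOST = "www.new-domain.example"; // placeholder for the new domain

// Redirect every request 1:1 to the new domain, preserving the path and
// query string, so the move is recognizable as a clean domain move and
// signals can be transferred directly.
app.use((req, res) => {
  res.redirect(301, `https://${NEW_HOST}${req.originalUrl}`);
});

app.listen(80);
```

Any restructuring of the URLs themselves would then happen later, on the new domain, as a separate redirect layer once the move has settled.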
Because every time you change the URL structure within a website, those fluctuations are always really painful, and it takes a lot of time to settle down. Yeah.

For us, though — a while back we did a URL change, where we changed the URLs from English to Arabic for the Arabic version of the website. And nothing happened: no drop in traffic, no fluctuation in the rankings, everything was just good. And we did it for millions of pages. OK. So people should just come to you for URL changes. Yeah. That's quite a case study. You should definitely write up a case study. Yeah. Yeah. Case studies are always cool.

All right. Maybe a chance for some of you who are here live in the hangout. What else is on your mind?

Hi, everyone. It's good to actually see Mihai on that side of the table. I've mostly seen him on the hangout side of things. So good to see you, mate. John, I just had a quick question to ask. We are undertaking a massive redesign project at the moment, and we are focusing on accessibility for disabled people. So we intend to keep some of the page headings for the purpose of screen readers only, because the visual content is going to be self-evident for the able-bodied customers. Now, we are also trying to use semantic web best practices, like having proper sections, proper headings, proper asides and articles, so that it's proper HTML5 and the pages also validate — they're W3C-validated. Now, what is the best way, from your perspective, to hide those headings so that we don't get penalized for doing something right? Because we want to keep those headings so that screen readers can read them out for the non-able-bodied customers of our websites.

You don't need to do anything special. So there's no penalty for having text like this hidden. That's perfectly fine. So I was just reading the hidden text guidelines. Obviously we won't be using white text on a white background, or positioning things off the screen, or anything like that. So as long as we are within the realms of what's mentioned in the webmaster guidelines, we should be fine? Yeah, I think that's perfectly fine. Also, we look at things like the intent when we manually review these websites from a web spam point of view, to see: is there an intent here to deceive search engines? And if it's essentially the same text as you have elsewhere on the page, or it's describing an image or describing a section of the page, then there's no bad intent there. So that's not something that anyone from the web spam team would complain about. And from an algorithmic point of view, if it's essentially the same thing as you otherwise have on those pages, then that's perfectly fine too.

I'm actually leaning more towards the ARIA labels — that's what I'm trying to get at. Our accessibility expert came and said we don't need to show this heading visually, but we need to keep it for the customers who, let's say, can't see so well. How would Google perceive that? Because, A, we are trying to do the right thing for our customers — most of our customers are around 80-plus as well, and a lot of them use screen readers. So I just want to make sure that I'm not in those gray areas, as I would say. Yeah, you're on a good track there. I think that's cool. Thank you. Thank you, John.

And just following up on the mobile-first index as well. As I mentioned, we're redesigning — we're actually one of the largest telcos in Australia here, and it's a massive, massive website.
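(Going back to the hidden-headings question: a minimal sketch of the screen-reader-only pattern discussed above — the widely used "visually hidden" CSS approach, which keeps the heading in the accessibility tree without resorting to display:none or white-on-white text. The class name and markup here are illustrative, not a prescribed implementation.)

```typescript
// CSS for a heading that screen readers announce but sighted users don't
// see; this is the common "sr-only" pattern used by many frameworks.
const srOnlyCss = `
  .sr-only {
    position: absolute;
    width: 1px;
    height: 1px;
    padding: 0;
    margin: -1px;
    overflow: hidden;
    clip: rect(0, 0, 0, 0);
    white-space: nowrap;
    border: 0;
  }
`;

// The heading stays in the DOM, so assistive technology reads it out,
// while the visual layout stays self-evident for sighted users.
const sectionMarkup = `
  <section aria-labelledby="plans-heading">
    <h2 id="plans-heading" class="sr-only">Mobile plans</h2>
    <!-- visually self-evident content goes here -->
  </section>
`;
```

Because the hidden text describes content that is genuinely on the page, it matches the "same intent" test John describes above.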
We're just starting with a redesign of, let's say, the top 20 pages of the website. So how does one prepare for mobile-first? Is there a percentage where Google might say, OK, these guys are almost 50% mobile-ready, or 80% ready, before we get switched to the mobile-first index? Is there a threshold that you look for, John?

No, we don't have a fixed threshold that's easily comparable. We do take into account a number of things, like the content on the pages, the structured data that you have on the pages, the images, the videos — internal linking, I think, we also take into account. And all of these things add up for us, and we try to look at it on a per-site basis, so we can say this site is pretty much ready. And obviously no site is perfect. So especially if you have something more than a one-page website, there will always be parts that are better for mobile and parts that might be a little bit worse. So we try to find the right threshold there, so that we don't cause any problems. So you're looking for a natural change, a natural switch? Yeah, yeah. We essentially need to be sure that the mobile version, when we index it, is equivalent to the desktop version, so that more or less we can shift it over without any problems. Gotcha. Thank you so much, John. Thank you, everyone.

John, regarding the ARIA labels: is this something that Google recognizes — generally speaking, content that's specifically made for screen readers or disabled people? I don't think we use it explicitly at the moment in search. There's always talk about using something like accessibility as a ranking factor, but I believe so far that's not something we've done. So you don't see it like, oh, this is made for screen readers — it's just like any other content? We mostly see it as normal content on the page.

About what you said, John — you mentioned the web spam team does manually look at web pages, and if they see a pattern, and if they find something, they do pick it up. Would you be able to expand on that? Like, how often does that happen? I just want to make sure that Google understands that we are trying to do the right thing here. Yeah, so usually from a manual point of view, we try to take action when we realize that we can't solve something algorithmically — when something is really causing a problem and it's affecting our search results. That's where the manual web spam team might step in and say, we need to take action here, which could be to demote the website in search, to kind of neutralize specific elements on a page, or, in an extreme case, to remove a page from search. But these are things where people manually look at these pages, and all of the actions from the manual web spam team get reviewed by someone else on the team. So it's not that one person can come in and say, I'm confused about this website, and remove it from the internet. It's really something that we try to do fairly rarely, and usually in really kind of extreme cases. So if you have a website that just has some accessibility features, that has hidden text because of that, that wouldn't be a problem at all. Thank you. Thank you so much.

Hello. Hi, guys. Hello, John. Hello, everyone. This is Vasilis. I'm joining from Greece. I don't know if you can hear me well. Yes. OK. All right. So I posted a question on your Google Plus page a while back, in regards to single-page applications and client-side-rendered websites. I don't know if you've seen it.
I could repeat it if you'd like. OK, that sounds good. OK, great. So we have built an e-commerce website that's built with Ruby on Rails on the back end, uses React.js for the front end, and is served by a Node.js application. We're currently experiencing a major issue. We shipped one project about a year ago, and it's been successfully crawled by Google, and we can see it in Search Console — we can see the website and how it looks from the screenshots, in the old Search Console, actually. And now we are ready to ship another project, which is actually migrating from Ruby on Rails with HTML templates to the React framework, the new one. And we have serious issues with indexing it and crawling it and rendering it in Search Console. I came across your Google I/O talk from four months ago, where you mentioned something about dynamic rendering, and Puppeteer as an alternative to use in order for Google to index the website. So I just wanted to ask: if we serve static HTML files that are identical to the actual page that the user sees — and we do it instantly, so when a product or a page is updated (because it's an e-commerce website), we just pre-render and generate a new static file and serve it to Googlebot, or to Open Graph crawlers, or whatever else — is this a good practice, or will we get penalized or blacklisted? That's my first question.

That's a good idea. If you can render it on your side, especially for Google, and maybe for other search engines and social media sites as well, then you don't have to worry about us trying to render those pages for you. You can control that yourself, and that makes it a little bit easier. Especially with a larger e-commerce site, one of the things to keep in mind is that if we have to render the pages, it's always a little bit offset time-wise until we actually do that. So if you add new products to your shop, for example, it might be a week, maybe two weeks, before we actually render the page and can add it to the index. Whereas if you serve static HTML, we can pick that up and put it in the index within minutes. OK. Sounds like you're on the right track.

OK, OK, that's good. So the fact that the older website is being rendered fine in Search Console and the new one is not — is that just because Googlebot needs some time to crawl and re-crawl and render it, let's say, with the actual JavaScript and the Ajax calls it has in the background? Maybe, maybe. I mean, if this is something that you launched in the last couple of days, that could be happening. It might also be that the new frameworks you're using don't work that well for us — if we can't render them, then we can't index them. One thing there is that Googlebot uses Chrome 41 for rendering, which is kind of old now. That's one limitation. So if you use ES6 functions within your app and you serve those to Googlebot, Googlebot wouldn't be able to process them. OK, all right. But if you pre-render, then you don't have to worry about Googlebot trying to figure out the JavaScript. Yeah, of course. That's what we found out, and that's what we actually do, because it's the only solution at the moment. Do you have plans in the future to render JavaScript single-page applications — I mean, more easily? We do have plans to improve that, and I would expect that to happen, but it's hard. It's really hard to do this at scale.
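(A minimal sketch of the dynamic rendering setup being discussed: pre-render the client-side app with Puppeteer and serve the resulting static HTML to crawlers, while regular visitors get the normal JavaScript version. The host name and bot list are illustrative; in production you would pre-render when products are updated, as Vasilis describes, or cache aggressively, rather than rendering per request.)

```typescript
import express from "express";
import puppeteer from "puppeteer";

const app = express();
// Crawlers that should receive the pre-rendered HTML (illustrative list).
const BOT_UA = /Googlebot|bingbot|facebookexternalhit|Twitterbot/i;

async function renderPage(url: string): Promise<string> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Wait until the network is idle so Ajax-loaded content is in the DOM.
  await page.goto(url, { waitUntil: "networkidle0" });
  const html = await page.content(); // fully rendered HTML snapshot
  await browser.close();
  return html;
}

app.use(async (req, res, next) => {
  const ua = req.headers["user-agent"] ?? "";
  if (!BOT_UA.test(ua)) return next(); // normal users get the React app
  res.send(await renderPage(`https://shop.example${req.originalUrl}`));
});

app.listen(3000);
```

Serving crawlers a pre-rendered copy of the same content users see is fine per John's answer above, as long as the static HTML is equivalent to what users get.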
So for individual pages, that's no problem. But if you have millions of pages that we have to render every day, that's something that will probably still take a bit of time until we can do it at the same speed as normal crawling. OK, thank you very much. That was my question. Thank you, everyone.

Hi, John. It's Chris Yackulic with AndroidHeadlines.com. How are you doing? Doing well. Last week, you answered a question for me, and I left a comment on this week's hangout. Not sure if you had a chance to look it over or not — it was quite a lengthy one, and I probably don't want to repeat it in total, but I had gone over some things after you answered our question last week. I'm not sure if you had a chance to look at it or not. I think we looked at the site. OK.

My question is just: why exactly were we dropped in the rankings when you had said there was nothing wrong with the site? We felt like there was no reason for the drop in the organic search results, according to what your answer was, and we felt like we had earned our rankings, or more. We've been audited by a respected SEO — you know, someone who does forensic SEO audits for major tech news sites — and even they're baffled by the ranking drops, with no signals to justify them. We keep improving our product every day. Just like Google, we hold meetings: creative, original content, site improvements, everything you can do. And when you talked, you gave us a few hints on stuff, and that was kind of my response to you this week: we're already doing a lot of that stuff, you know, in regards to original content and so on. There's only so much news that you can cover, so we do a lot of original content. But it doesn't sound like something we should be dropped in the rankings for. We've been around for eight years, had really good rankings, followed Google's best practices. And then this year we took quite a big hit, and organic search traffic kind of went off a cliff, and it's really affected our company greatly — laying off people and stuff like that. When we took a step back and everyone looked at our competitors, what they're doing, and what we're trying to do that's original, we just felt it was a little bit unfair and unjustified. And we're hoping — is there any way we can get some type of manual review for a site? Is that possible? Because at the same time, with a ranking drop as heavy as it is, at two thirds, we're worried about going out of business. You could imagine if Google lost two thirds of its traffic or revenue, how it would affect the company. So I was really hoping you could give me some advice on how we can go about getting our rankings back to where they were for so long.

Now, I don't know. I think that's always a hard situation, and kind of hard for us to point at something specific. So we've looked at the site together with some of the ranking engineers as well. OK. Because you've been sending all of these details, and that's really been useful. From what I see from the engineers, it's not that there's anything specific that you're doing wrong. It's essentially just a change in the general ecosystem and the way that we rank things.
So that makes it really hard for us to say, you should be doing this differently, or you should be doing more of this kind of article or this kind of content. It's essentially kind of a natural shift that's been happening over a longer period of time, where things have been changing in search. But I'll definitely take a look at it again with some of the engineers here to see if there's something from our side that we might be able to point out — where we could say, well, this is something that could help you a little bit at least, or this is a direction that could make sense in these kinds of cases. It's always really tricky with news websites, because the ranking depends a lot on what the current situation is. You can't say this article should always be ranking number one, because next week a new news article comes out on that topic, and maybe that ranks number one. And that makes it really tricky to analyze these kinds of news-type websites. But it is something that we're looking into, and it has triggered a lot of discussions on our side with regards to what we can do to make shifts on our ranking side less problematic for the websites that are out there. Because these changes in ranking, I think, will always happen, but maybe there are ways that we can make them less painful for essentially legitimate websites that are shifting around in the rankings.

And I appreciate that, because we've been around for eight years and worked with Google — we're an Android site, so we've worked with the team. And we just found it kind of surprising when all of a sudden there was a huge change like that. So I do appreciate that you guys are looking into it. One other question that I have, in regards to original content: sometimes we'll do an original news piece that one of our editors or writers has developed, and another site will cover our news piece and source us as the originator of the news. And if you take a look in Google News, or maybe in the search rankings, they'll actually rank higher for that story than we will — and we're the originators of that content. That was kind of surprising. You would feel like the originators of the content should be the ones ranking higher, at least, in the search engine rankings or Google News. And I know that's kind of a problem for some sites, but it looks like if a site has more links or a higher ranking in Google, it will outrank you for your own original content, even though they source you as the originator. I'd like to hear what feedback you have on that.

If you have examples of that, I would love to have them, to talk about it. Because that's something that we sometimes hear anecdotally, but it's really hard for us to point at something specific when we only know about it anecdotally. So if you have things like: for this query, we're seeing this site that's writing about the article we created ranking above us — that's something that's really useful to have, to discuss with the team. I did put it in the comments section, but because news moves so fast, today's news is old by tomorrow. Who should I contact to make you aware of that? It's fine to put it in the Google Plus post, for example, or to send it to me on Google Plus. One thing to also mention is that the Google News rankings tend to be different from the normal web search rankings.
So sometimes there are just normal differences happening there too. OK, thank you very much for answering my questions, John. I know you've probably seen enough of me, but I wanted to get to the bottom of a lot of this stuff. And we have a new site coming out, and I'm taking a lot of your suggestions. So I really do appreciate the time and the effort that you've taken with me. Thank you. Thank you, John.

Hi, John. We had this issue before with one of our sites. OK. One of the big comparison sites was scraping the listings from our site, and they were ranking higher than us, because they have a stronger brand than our site — it was in one of the countries where we're not that big. So what did we actually do to stop this? We contacted the other site and stopped sharing the listings with them. I think it took maybe two months, and then we were ranking for our own content again. But it takes a bit of time. Yeah, it works. I think if they're really scraping, then that's an option. But if it's, like, one news site writing about an article that someone else wrote, you can't tell them to stop writing about it — you're also happy that other people are writing about your articles. That's if you just get mentioned. Yeah, no, they were just scraping the whole landing page. So it was two identical pages with similar listings — a 100% copy. Yeah, because for some locations we cover, no one except our website covers that location. And we'd see it on their website exactly the same, like a duplicated landing page. But only the main content, not the supplementary content. Like, only the listings? The listings. So it's not 100%? But it's exactly the same — it's like 100%. So our pages ended up like an omitted search result. OK. And then after we stopped this, we actually... came back. Yeah.

I think if they're really scraping, then that's definitely an approach. It sounds like in this case it's more like — I don't know, we have this problem all the time with our webmaster blog. We put something on the webmaster blog, and then Barry writes about it, and he ranks. It's like, I don't know, it's so unfair. So unfair. But I mean, for us it's not a problem, because for us it's more about getting the information out there, and if someone else ranks for that, I think that's fine. But I know for other websites that can be quite a problem. Especially when you see, for those main queries, maybe in the top stories carousel, the other website that's writing about your content — that's frustrating. It's very frustrating.

And from experience — we mainly work with e-commerce — when one of our campaigns goes viral and people pick it up, you can't really educate publishers to put a canonical on it or use the right link; maybe they don't use the link at all, or they just put a link to the homepage, and they rank much higher than you will ever be on that subject. But that's not a big problem for an e-commerce site. For a publisher, though — we had that situation with a publisher — that's where the main traffic is. Yeah, and it really hurts to see your piece of content — something like, I don't know, a study with lots of effort put into it — ranking number five, with the news coverage at number three. So we do see that happen for publishers. But anyway, Google is already trying, like always, to cut out the middleman, right? Is that an option here? Like, maybe it's just cutting out the middleman and...
I don't know about cutting out the middleman. I mean, sometimes people do add more value by writing about a topic as well. So it's something where I wouldn't say we should always point to the original. A really common situation is, like, a photographer: they have a photo on their webpage, and as text they have the settings they used to take the photo. And someone will take that photo and put it on a page and maybe write something about the area. Then, of course, that page with the extra content should probably be the one that's ranking. But I imagine there are lots of these situations, especially around content websites and news, where you do have similar articles — maybe they're even rewriting your article and mentioning your site, maybe linking to the homepage, or maybe not even linking at all, just the name. And that's, I think, frustrating. I think that's something that we should be doing a better job at, especially if it happens regularly.

So Google does think that — everything else being the same, and the user's query being relevant to both content pieces — the original content should theoretically, in most cases, be the one featured? I don't know if we have a policy around that, but I think for many cases that makes sense. So you do try to figure out the original source in some fashion. If we can figure that out. But the newer one can also be fresher, can also add some value, and outrank the original one. And I mean, there are also aspects like: maybe the original site is a really terrible website that has, like, 500 pop-ups, and then you're like, well, this is the original, and this is someone just writing about it, but the original is so terrible that we send people to a version that is more readable. Yeah, it happened the other way around for us. We covered something from another site that was a terrible experience on the publisher's side, and we outranked them, and we were sorry for it. But yeah. Oh, it's your fault. Yeah, it wasn't intentional at all. But with the publisher we worked with, there were no, let's say, pop-ups. There were some ads, obviously, like with all publishers, but nothing intrusive. Yeah, nothing intrusive.

OK, let me mention some of the questions that were submitted, so that we don't ignore them completely. Let's see — mobile-first indexing: am I correct to assume that Google will now rank our desktop pages based on how well Google perceives our mobile pages? Yes, that's correct. Could we see fluctuations in ranking now if Google is seeing our mobile version less favorably than before? Theoretically, you could see fluctuations. I've also seen sites where the mobile version actually ranks better, so that can happen too. Should we now be concentrating more on mobile usability and content over desktop, now that it's taking preference? I think for most sites, the mobile traffic will probably be bigger than the desktop traffic anyway. So even from a user's point of view, it makes sense to focus on the mobile version. How does Google decide when to enable mobile-first indexing? I think we talked about this briefly before.

All right, another question: can we do more site clinics? Sure, I guess we can do that. It's always a bit tricky to do them live, because it feels like I'm digging for some random thing on a website where you've been looking at your website for months, and I'm supposed to find the real problems within minutes.
I don't know if that is really so useful for people.

We have in our Search Console reports some URLs which are indexed but blocked by robots.txt. Is this wasting crawl budget? No — if we can't crawl them because they're blocked by robots.txt, then that means we don't have to crawl them.

I noticed Google suddenly dropped 25% of the how-to articles on my DIY site from the index. What could be going on here, and why? I don't know. So, we don't promise to index everything — that's one thing that could be happening, that maybe we're just indexing less. It could also be that there are some technical issues with those pages. In Search Console, you should be able to see some information on why we dropped URLs from the index.

I understand content in tabs will no longer be discounted on a mobile site. Does the same apply for content on desktop? Yes — with mobile-first indexing, if it's in a tab, we will count it normally.

Does Googlebot use a website's internal search functionality for finding pages? And does the quality of the internal search help with rankings? For the most part, we don't use the internal search, because Googlebot doesn't really know what to search for on a website. There are some websites that we can't crawl at all other than by using the internal search, and that's where our algorithms will try to recognize: oh, there's probably a lot of content here, but I can't crawl to it, so maybe we should find a way to work with the internal search. That can happen, but it's pretty rare. You should ideally aim for us to be able to crawl your website normally.

If I have a website about hamsters, and I've set up my CMS to put "hamster" as the first word in all page titles, can this contribute to receiving an algorithmic penalty? Or will the quality of the site in Google's eyes be in any way affected by this decision? I think that sounds like kind of a weird setup, but for the most part, we should be able to deal with that. So if you have specific words in your titles that are repeated across your whole website, I assume our title algorithms will spot that and just say, well, this is not really that critical here. And sometimes you see that also in the titles that we show in the search results. It's "hamster," not "hampster." Oh, I thought it was "hampster." I was about to say use cats instead, because I read it that way as well. I don't know what a "hampster" is, but OK. Right. So it's basically treated like supplemental content within the site. I mean, I would try to make the pages naturally focus on those topics, rather than artificially pushing keywords into the pages. Google will sometimes decide that, well, this title tag isn't that useful for users, and rewrite it — we'll just use what we think is best. Sometimes that works well; sometimes I get complaints. So it can happen.

OK, and then there's a long question about an hreflang setup. Let's see — I'll probably have to take a look at that in detail. For the question about the hreflang setup: if you have a forum thread that you can link to from the comments on the Google Plus post, that would be useful. Then I can take a look at the details there.

I implemented lazy loading on my site. When I use Fetch as Google in Webmaster Tools, the images don't load in the Googlebot version, but they load in the visitor version. Is this cause for concern? I see an empty box with the anchor text where the image should be. Maybe, maybe.
So I think the first thing you have to consider is: are these images something that you need to have indexed? Do you need them for image search? Do you expect traffic through image search? A lot of times, images on a page are more for decoration than anything else. And if they're more for decoration, or for layout, or things like logos and, I don't know, maybe screenshots that you don't need indexed in image search, then if they can't be loaded because of the lazy loading, that can be perfectly fine. If you do want those images findable in image search, then you should make sure that we can actually load them. We have some documentation coming out, hopefully real soon now, on lazy loading and how to do that best. The general approach that we gave at I/O was to use a noscript tag with a normal image tag in there. That's something that you could use here. I don't know if you would see that directly in Fetch and Render as Google, but we would be able to pick it up and say, there's an image here and it's using lazy loading, so we'll just load it whenever we need it.

After the removal of the public "submit URL to Google" tool, I noticed Googlebot is taking too long to index a third-party page. Are there any plans to bring back this tool? I know it was overused by many, but as a legitimate white-hat SEO, this tool helped me to get citations indexed a lot faster. I don't know what the plans are there. I recently saw a blog post from Bing saying they removed this tool from their systems as well. So I don't know what that means. If they're seeing a lot of abuse and problems with this too, it might be tricky for us to bring something like this back.

Fetch as Google in Webmaster Tools, or Search Console, doesn't load the full length of one of my pages. This page happens to be the longest page on my site; the page loading stops approximately three quarters of the way down. When I check the Google cache, the full page is there. Is there a limit to the page size or length when Google crawls? Yes, there's a limit to the size when we crawl, but it's, I think, a couple hundred megabytes of HTML, which should be enough for most pages. The Fetch as Google tool has a tighter limit, mostly so that we can bring the response back to you fairly quickly, so you don't have to wait for everything to be downloaded. So if you're seeing the full content in the cache, and if you can search in Google for parts of the text that are lower on the page and find that page, then you should be all set. That should be fine.

Let's see, a structured data question: a website with over 150 million pages updated its structured data from microdata to JSON-LD three months ago, used Google's tools to test and optimize, and everything is exactly as Google suggests. However, we lost 98% of the structured data in Google, and over 40 million pages dropped from the index. What's up with that? So, if you're losing pages from indexing, that wouldn't be related to how you use structured data — that sounds like something completely separate. It does worry me if you're losing a lot of pages from indexing; maybe there's some technical reason behind that, which also affects how we can pick up the structured data. Because if we can't index those pages, we can't use the structured data there. So as a first step, I would try to figure out what is happening with those 40 million pages. Are they really critical pages for your site? What happened with them? Check in the new Search Console.
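(Going back to the lazy-loading answer above: a minimal sketch of the noscript fallback John recommends for lazy-loaded images, written here as a small template helper. The data-src attribute scheme is illustrative — it depends on whichever lazy-loading library is actually in use.)

```typescript
// Emit a lazy-loaded <img> plus a <noscript> fallback, so that crawlers
// (and users without JavaScript) always find a plain, loadable image tag.
function lazyImage(src: string, alt: string): string {
  return `
    <img data-src="${src}" alt="${alt}" class="lazyload">
    <noscript>
      <img src="${src}" alt="${alt}">
    </noscript>
  `;
}

// Example: lazyImage("/img/products/widget.jpg", "Blue widget, front view")
```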
You can use the Inspect URL tool to see: does Google know about these pages? Did Google decide just not to index them? Was there maybe a technical issue why we couldn't index them? All of that should be fairly solvable in the new Search Console. Obviously, with 40 million pages, you have to find some representative sample pages — you can't test them all. But that's what I would mostly aim for. Definitely, though, the change from microdata to JSON-LD would not result in pages being dropped from the index. OK.

Whoa, so many questions left. I don't think we'll make it through all of these. But maybe, if some of you here have any questions, we'll jump over to things from you all. Yes?

Hi, John. Hi. I have a feeling I can hear double. OK. Now, I have a quick question about the web spam team. How does the web spam team process things when there's a manual action? Are there specialists, or are they normal people who...? OK. The problem is, I have sent some DMCA requests, and they just didn't remove the pictures on the pages of the other websites. And they always write back that the picture doesn't exist on the other site. So I don't think there are specialists. OK. But that sounds like a DMCA complaint, and that's generally processed by different teams — so not by the web spam team, but by people who review these legal requests. And these are people who deal with legal requests essentially the whole time, so they should have a fairly good understanding of the web and of the kinds of requests that come in. So it's hard to say exactly why they would be pushing that back.

OK. And how much time do they have for processing a request — the web spam team? Is there a set amount of time, or do they have only two minutes for processing one request? As far as I know, they have enough time to review these. We do get a lot of requests — especially the DMCA complaints, we get a lot of those. Again, that's not handled by the web spam team. But on the one hand, we use tools to try to figure out what we can solve automatically, and then we manually review the things that otherwise come in. But as with any manual review, it might be that one person looks at it and doesn't notice the big issue that someone else would see when they look at it. So sometimes it makes sense to try again, or to be more elaborate in the description of what you're submitting. If you're seeing this regularly, and it's very obvious that this is a copy and you're the original, and you can't get that to work through the process, you can also send it to me, and I can pass it on to the team to have them look at it in a little bit more detail. I can't promise that they would be able to switch it over, but at least we can discuss whether something wrong is happening here, or whether this is essentially the way it should be happening. And I don't know the details of how these DMCA complaints are judged — that's something that is handled on a legal basis, and that might be a little bit different from how the web spam team would look at a website and say, well, this is similar or not. OK, thank you.

Hi, John. Hi, everyone. OK, so I have a few questions about Search Console. Our websites are related to health. And as we know, health sites saw bigger effects from this change, because Google released an algorithm update focused on this in August, right? And like others, our sites were affected too. So in Search Console, our rankings have had a big decrease since August.
And we also provide many important and useful articles about health for people. And we cite the data in every article, to increase trust. And many doctors write articles posted on this site to share their recommendations or observations with people. So we want to know how we can improve this situation. And there is one confusing thing for us: in our country there are similar websites related to health too, and we think our article structure is similar, but their rankings didn't decrease. The one difference we found between us is that they have a search page for finding hospitals, clinics, pharmacies, or doctors. So is that helpful for enhancing ranking in search — because Google's algorithms consider those websites to have a lot of trustworthy and useful information? We're confused, and we want to know how to enhance the ranking of our website.

I had a lot of trouble trying to understand exactly what you mean. But in general, what I would do in a case like this is post the details that you're seeing in the Webmaster Help Forum, so that people can look at your pages in detail and see if there is something specific that we could recommend. And usually, if you post in the Webmaster Forum and there are issues that the people there can't solve — where they say, this is so confusing, I don't know why Google is doing it like this — they can pass it on to us at Google as well, so that we can take a look at the forum thread too. But it sounds like maybe there are some things that are specific to your website, or to the competitors that you're seeing there. And for that, it really makes sense to look at the details. It's really hard to give general advice and say, well, websites should be doing this now, because it's such a broad scope of things. But the folks in the Webmaster Help Forum are pretty friendly and helpful. Mihai helps out there too, so maybe you'll run into him. But that's where I would go for these kinds of site-specific questions where you're seeing changes in ranking.

OK. So I have another question. Because the search pages on the site use dynamic URLs — like inserting different variables after the URL — we think maybe that creates duplicate pages for Google. But we also insert a canonical tag on every search results page. So does Google consider those to be different pages, or duplicate pages? We should be able to recognize that. So URL parameters, if you have them there, that's something where, over time, we will try to learn which parameters are important. And with your rel canonical on those pages, you give us information about that as well. So that should be no problem. OK. Thank you. Thank you so much.

All right, maybe we'll take a last question from you all here in the room, if there's anything. Yeah, I have one. OK, go for it. It's about the mobile site redirect. So for the mobile site, we redirect based on the user agent: if the user agent is mobile, it's redirected. But in a desktop browser we don't redirect, so if you try to access the mobile site from Chrome or Firefox on desktop, it will not redirect. Is it fine to use it? That's fine. A redirect for those user agents, but not for desktop users? Or should we be making it equal? It should be equal. So it should be based on the type of device, if you do something like that. And I think it's generally a good practice, if you recognize a mobile user going to the desktop page, to redirect them to the mobile page.
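(A minimal sketch of this kind of device-based redirect, following the 302 recommendation John explains just below. The m-dot host name is a placeholder, and the detection regex is deliberately simplistic; a Vary: User-Agent header is included so caches know the response differs by device.)

```typescript
import express from "express";

const app = express();

app.use((req, res, next) => {
  const ua = req.headers["user-agent"] ?? "";
  // Tell caches that the response depends on the user agent.
  res.setHeader("Vary", "User-Agent");
  if (/Mobile|Android|iPhone/i.test(ua)) {
    // 302 (temporary), not 301: a cached permanent redirect could keep
    // sending a user to the mobile URL after they switch to desktop.
    return res.redirect(302, `https://m.example.com${req.originalUrl}`);
  }
  next(); // desktop user agents get the desktop page
});

app.listen(80);
```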
Because that's kind of how responsive design works too, in that it tries to bring up the version that's suited for the device. So I think that's OK. It should also be in our guidelines for mobile-friendly websites — I think we have a section on it. Yeah, and do you recommend a 302 instead of a 301? Ours is a 301, but I think that doesn't really matter? We recommend a 302 because it's a temporary redirect. That means that if the redirect gets cached somewhere along the line to the user, then when they switch to a desktop device, they don't get the redirect again. That's kind of the difference. It's not that PageRank is being dropped or anything crazy — sometimes people worry too much about 301 versus 302. Yeah, so a 302 is generally the best practice when it comes to redirecting users based on their device? Yeah, yeah. Cool, OK.

So let me pause the recording here. Thank you all for joining in. It was good to have you here in person — thanks for coming all the way over. It was fantastic. Thanks for having us. Yeah, cool. I'll set up the next batch of hangouts probably today or early next week. So if anything you submitted didn't get answered, feel free to copy it over. Or, of course, drop by the Webmaster Help Forum, where people can help solve these problems as well, and maybe give you some tips in general on what to improve on your website. All right, so with that, let's take a break here. I wish you all a great weekend. Thanks for joining in, and hope to see you again next time. Bye, everyone. Bye. Bye.