Hey everybody, okay, time to introduce our next speaker. So MozCon Virtual, really cool, best virtual conference out there. But we were originally gonna have MozCon in person in Seattle, and there are a few people coming in from different parts of the world that I really wanted to meet. Our next speaker is one of those people, Izzy Smith. I've never met Izzy in real life. If you follow her on Twitter, which you should, she has one of the best SEO Twitter games going, other than me, of course. She's great. She used to do SEO for Sixt, the car rental company, and now she works for Ryte, the SEO tools company, doing amazing work showing people different things about Google. She's talking about something today that's near and dear to my heart, which is click-through rate. I'm not gonna get into it; I write a lot about this, so I'm excited to watch your presentation. Let's kick it over to Izzy. So hello everyone, and welcome to my talk all about how to be ahead of the CTR curve. And good morning, MozCon. I'm unfortunately very sad that I can't be there in person, but it's a huge honor to be speaking here in front of you all today. And of course, the pleasure of attending a physical conference is shadowed by the many sacrifices we've had to make. We've all been through a lot recently, no matter which situation you've been in, so it's very important to celebrate little successes. Personally, I've been trying my best as much as I can during the situation, and I'm sure you have too: holding it together for the people around me, my family back home, my colleagues, and my community. And in our work life, we've had to rapidly adapt in certain cases. I've spoken with good friends who have agencies, good friends who are consultants, who lost clients, unfortunately, and even in-house folks who have lost their jobs. So we're really rapidly changing our priorities here in our SEO work.
For example, at Ryte, we've been doing a lot of analysis into GSC impression data to uncover which industries are performing well and which are struggling, so that we can feed this information into action plans and help our clients out. And yeah, our time spent on SEO tasks is becoming even more precious. You've probably noticed this as well. We're all scrambling to get the best jobs done with limited time, unfortunately: a lot of copying and pasting, a lot of manual tasks. We're trying to do our best as we normally would, but with shorter time and in much stranger circumstances. Am I right? I'm sure you agree. And yeah, we're all striving to spend our time and knowledge to achieve fantastic rankings for our clients and our businesses via things like relevant content, great technical prowess, high-quality user experience and so on. But maybe sometimes we're losing sight of one of the most valuable KPIs. Are we focusing on what this all boils down to? Something that's indicative of everything we strive for in SEO: valuable user experiences, meaningful engagements and so on. The humble click, the KPI it all comes down to. But of course, not all clicks are created equal. The better KPI we need to be aspiring to is the engaged click. Because what is a click without the intent to be there, to prolong the visit and become a customer? Also known as the long click, it's something we should be aspiring to. Not only does it result in more value for your business, but it's the type of qualitative feedback that you can directly provide to search engines to prove that you deserve to be ranking well, or higher than before. An excerpt from Steven Levy's book puts it best: to paraphrase Tolstoy, happy users were all the same. The best sign of their happiness was the long click. Now, this is a pattern widely analyzed by SEOs, data scientists and computer scientists alike, so I'd like to reference it quite a lot.
And in this talk, I'll mention it quite often: the Google patent "Modifying search result ranking based on implicit user feedback". Let me read you some boring slides. Don't worry, there'll be memes and funny stuff later, but it's important to get this context across. So, user reactions to particular search results or search result lists may be gauged, so that results on which users often click will receive a higher ranking. There are also short clicks: a short click can be considered indicative of a poor page and thus given a low weight. A medium click can be considered indicative of a potentially good or okay page and given a slightly higher weight, like 0.5. And a long click can be seen as a great page and given a much higher weight. Finally, there is the last click, when the user doesn't return to the main SERP. This can be considered likely indicative of a good or okay page and given a fairly high weight, for example 0.9. Of course, it depends, like everything in SEO. A short click can also be indicative of quality if someone is directly looking for a quick piece of information. But remember, Google is smart enough to answer those queries themselves. Queries that were resolved in a positive short click, positive for you and your brand, will simply all be satisfied by Google anyway; if not now, then later. So all of that aside, where do we start with analysing and improving the quality and quantity of our engaged clicks, those positive long clicks? For that, we need to measure our organic clicks and their patterns effectively, and get to work rapidly. And for that we need the best tool in our inventory. Of course, I'm talking about the wonderful Google Search Console, which is home to the most powerful dataset that we as SEOs, webmasters and business owners have.
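As an aside, the weighting scheme just described can be sketched as a toy scoring function. The dwell-time thresholds and most of the weights here are illustrative assumptions; only the 0.5 medium-click and 0.9 last-click values come from the examples above, and none of them are Google's real numbers.

```python
# Toy scoring of implicit click feedback, loosely following the weighting
# described in the "implicit user feedback" patent discussion above.
# Thresholds and weights are illustrative assumptions, not Google's values.

def click_weight(dwell_seconds, returned_to_serp=True):
    """Weight a single click by how long the user stayed on the page."""
    if not returned_to_serp:
        return 0.9          # "last click": user never came back to the SERP
    if dwell_seconds < 10:
        return 0.1          # short click: likely a poor page
    if dwell_seconds < 60:
        return 0.5          # medium click: potentially good / okay page
    return 1.0              # long click: strong positive signal

# Aggregate feedback for one result across several (dwell, returned) sessions.
sessions = [(5, True), (120, True), (300, False)]
score = sum(click_weight(d, r) for d, r in sessions) / len(sessions)
```

Under this toy model, a result collecting mostly long and last clicks scores near 1.0, while one collecting pogo-sticking short clicks sinks toward 0.1, which is the intuition behind chasing engaged clicks rather than raw clicks.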
But all too often the interface highly limits our potential for slicing and dicing the data to our needs, for drilling down, for filtering, for seeing the full picture past those initial 1,000 rows of information. So using Google Search Console manually for search performance won't really help you get the full extent out of it. I can only recommend building a more actionable performance dashboard: pull in that data via the GSC API. For example, the wonderful Aleyda Solis has a great template if you're struggling to get started. I'm personally using Ryte Search Success; it's the easiest and most efficient way to analyze all of that GSC data. But the point is, you need to turn all of that wonderful data into a masterpiece. And yeah, myself and my colleagues Alex and Marcus on the technical SEO team at Ryte have the best job in the world, because we have 33 terabytes of GSC search performance data to sift through, to analyze, to find those patterns, to detect what is going on. It's a veritable treasure trove of nerdy SEO data. I love it. And that's around 5,000 connected accounts, which is pretty exciting. This usually makes me think I'm a data scientist, but I'm not. Maybe you aren't either. But luckily I have those in-house, and it doesn't matter anyway; you don't need to be. So whether you beautify your precious data using Google Data Studio with help from Aleyda's wonderful template, or you have your own in-house-built solutions, or, also, Hannah Rampton, she's fantastic, very recently ramped up her GSC connector in Google Sheets, please check out that template as well. There's a link. Or if you have tons and tons of data with limited time, a tool like Search Success. However you use it, just use it.
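If you do pull the data yourself via the GSC API, the Search Analytics endpoint returns at most 25,000 rows per request, so you page through with `startRow` until a page comes back short. A minimal sketch of that paging logic follows; the `fake_fetch` function is a hypothetical stand-in for an authenticated call such as `service.searchanalytics().query(siteUrl=site, body=body).execute()` from `google-api-python-client`, so the paging itself can be shown self-contained.

```python
# Sketch of paging through GSC Search Analytics data. The API caps each
# response at 25,000 rows, so we advance startRow until a short page arrives.
# `fetch` stands in for the real, authenticated API call (an assumption here).

ROW_LIMIT = 25000

def fetch_all_rows(fetch, start_date, end_date, dimensions=("query", "page")):
    rows, start_row = [], 0
    while True:
        body = {
            "startDate": start_date,
            "endDate": end_date,
            "dimensions": list(dimensions),
            "rowLimit": ROW_LIMIT,
            "startRow": start_row,
        }
        page = fetch(body).get("rows", [])
        rows.extend(page)
        if len(page) < ROW_LIMIT:   # short page => no more data
            return rows
        start_row += ROW_LIMIT

# Fake fetch for illustration: pretends the account has 30,000 rows total.
def fake_fetch(body):
    remaining = 30000 - body["startRow"]
    count = max(0, min(ROW_LIMIT, remaining))
    return {"rows": [{"clicks": 1}] * count}

all_rows = fetch_all_rows(fake_fetch, "2020-05-01", "2020-05-31")
```

Swapping `fake_fetch` for the real client call gives you the full query/page dataset past the interface's 1,000-row ceiling, ready for a dashboard or spreadsheet.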
It doesn't really matter what tool you're using or how you pull in that data, as long as you're using that real GSC data for analyzing your long clicks and your impressions. Because as powerful as they may be, rank trackers won't always cut it, due to the fact that you're just looking at a ranking position snapshot. Only in Search Console can you really see what happened and investigate. It's your data, you already have access to it. It's free data, so why wouldn't you use it? And it's reliable data, purely from Google itself, the beast that we are targeting, that we're working for. We're not working for them directly, of course, but you know what I mean. So use it, but use it wisely. I can't stress that enough: with great data comes great responsibility. You have to know what you're looking for. Don't get lost down the tracks of average positions, misleading click-through rates, or high impressions that give you false hope. Remember, our time is more precious than ever, so we have to use it as smartly as we can. So today I'm gonna show you some crucial patterns to look out for in your search performance data, patterns you can find in your own GSC accounts for your own business and your own clients, so you're really aware of what is going on there, and how you can detect things like your meaningful long clicks and keep uplifting them for even more growth. I'm using real-life GSC data, and I've got some really nice, juicy examples for you. Of course, in many places it's anonymized to protect the clients I work with. So that you yourself can strive to be ahead of that CTR curve, to keep driving in those engaged clicks, and to keep proving to Google that you deserve to be ranking well. So, CTR, are you ready? With my talks, you can always expect cheesy puns and bad SEO memes, so you'd better be. Number one, let's get right to it: performance-based core update impacts.
I added this first as it's something that my colleagues and I have recently been working on: assisting our wonderful clients with uncovering where and how they were hit by Google's May core update. Of course, industry experts like the fantastic Dr. Pete have been working hard on analyzing bulk data, but as he mentions quite often in his article, no one metric can ever tell the whole story. So you need to sift through your own search performance data, truly analyze what is going on there, and try to get as close to the reason as possible. In my analyses of the recent core update and of previous ones, my impact reasoning fell into three specific buckets. First, relevancy: Google has found your domain to be ranking for irrelevant queries that you were unable to satisfy, and so you lost those rankings. I don't see that as a bad thing; I see it as Google properly ranking my website for the keywords it deserves traffic for. It's a slight adjustment. Second, it could be E-A-T, which we hear about very often: expertise, authoritativeness, and trust. It's not always the sole reason for core update impacts, but in some cases Google could have seen you as having a low E-A-T score within your industry, and demoted your website to improve the quality and reliability of their results, as they do on a very ongoing basis. Or third, it could be underperformance: Google might have tested your page for specific keyword rankings that did not perform as well as expected. This could be a huge shift, where Google demotes you because they see this as a sign that your results are irrelevant, low quality, or maybe just in need of some sprucing. I love that word, sprucing. Or it could be something else completely, like a reversal of a previous core update, or factors from all of the above; it's usually quite hard to see. So let's take this health magazine as an example. Around June 2019, that core update, or was it May? May, yeah.
They saw a drop of 60% of clicks, a huge, dramatic loss. One month before the update, non-branded click-through rate was performing incredibly well; in that row you can see some really juicy click-through rates for high-impression keywords. But the core update recognized low E-A-T signals for such a high-stakes Your Money or Your Life industry and kicked them down. There were things like no expert authors and no real authorship at all: they were linking to generic journalist pages that had nothing to do with medicine, while giving advice on very serious matters. So they really weren't proving their expertise. Now, this is a fairly typical example of E-A-T. I'm not gonna talk about that so much today, but I created a template to guide you in the right direction for improving your E-A-T depending on the specific verticals you may work in. Check it out, I hope you like it. I love a good Trello board. But today I wanna point out something that we can analyze better with GSC data: underperformance factors getting us down. This can happen gradually over time, or in one fell swoop of a core update. So this is what a pretty typical core update impact looks like. You can see clearly the drop in impressions and clicks, as well as in the number of ranking keywords. It is especially this drop in the number of ranking keywords which is important, because around Christmas you can observe a similar drop in impressions and clicks, but there the number of keywords stays the same and even slightly increases, like it did here. So people were still searching for the same keywords, just in lower volumes. Let's take a look at some individual keywords. I've gathered around three or four of their focus keywords that they really try to optimize for, which have been directly impacted by the core update. We can see it very clearly: the position tanks, and with it the impressions.
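As an aside, the Christmas-versus-core-update check described above (clicks and impressions fall in both cases, but only an algorithmic demotion also shrinks the number of ranking keywords) can be sketched as a simple rule. The 25% and 10% thresholds and the sample numbers are illustrative assumptions, not benchmarks from the talk.

```python
# Distinguish a likely core-update hit from a seasonal dip: in both, clicks
# and impressions fall, but only an algorithmic demotion also shrinks the
# count of ranking keywords. Thresholds are illustrative assumptions.

def diagnose(before, after, click_drop=0.25, keyword_drop=0.10):
    click_loss = 1 - after["clicks"] / before["clicks"]
    keyword_loss = 1 - after["ranking_keywords"] / before["ranking_keywords"]
    if click_loss < click_drop:
        return "stable"
    if keyword_loss >= keyword_drop:
        return "likely algorithmic (core update?)"
    return "likely seasonal (same keywords, lower demand)"

# Hypothetical month-over-month snapshots.
may_update = diagnose(
    {"clicks": 100000, "ranking_keywords": 5000},
    {"clicks": 40000, "ranking_keywords": 3200},   # keywords vanished too
)
christmas = diagnose(
    {"clicks": 100000, "ranking_keywords": 5000},
    {"clicks": 70000, "ranking_keywords": 5100},   # keywords held steady
)
```

The point is only to encode the heuristic: the keyword-count dimension is what separates "people searched less" from "Google demoted you".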
You can also see, for the same keyword, how the site got tested within the top 10 for about 10 weeks. But during that time, the website didn't get a lot of clicks for this very lucrative, click-worthy keyword: just 273 clicks out of 125,000 searches, a CTR of roughly 0.2%, so it got pushed onto page two again. Exact same behavior with another affected keyword, a great focus keyword of theirs, which completely tanked in impressions and position. And another pretty juicy one, which failed to entice users to click through to their website, with a CTR of 0.1%. All of which were adamantly clear signs to Big G, Big Google, that they were not worthy, not a correct fit. So pattern number two is knowing when you've lost and when your precious time is better spent elsewhere. In other words, identifying irrelevant rankings and what I call inhospitable SERPs, because it's very hard to survive on those search engine result pages as an organic page. So, some examples from our own domain now. We have the Ryte wiki, which is like a lexicon of many specific online marketing terms. Here's a clear navigational query: as you can see, it's quite funny, a lot of people search for Bing on Google. You can clearly see the top 10 tests, always with a nice corresponding uplift in impressions. That's quite a big increase there. But of course, we didn't get many clicks through; we have a click-through rate very close to 0%. So we can't establish ourselves properly in the top 10, and therefore we're now on page five, which is well deserved. There's no need to take action here; this is really just optimizing for third-party visibility metrics, for ranking for those lucrative impression terms. Next up, a really typical definition-intent keyword SERP: people looking to understand what SERP stands for, search engine result page. It's difficult to get any meaningful amount of traffic via this keyword beyond the featured snippet that is there.
So first, we got qualified for the test on page two after applying some on-page optimizations, getting those classic ranking factors in place. Then we were tested in the top 10 for a couple of months. The length of these tests depends on the number of searches, because Google will likely need to ensure statistical relevance. But yet again, in this case we can't entice users to click through to our website, because that intent is simply not there. So we got pushed down to page three, probably for good this time. But since the keyword isn't really that juicy for us as a wiki page, we don't need to act. So what do we do about it? Nothing, nothing. The underperforming CTR only proved that our wiki pages are simply not a wholly relevant result for those pure answer-intent or branded navigational queries, like Bing, for example. So, using our precious time, we focus our attention elsewhere, on results that we have the power to influence and that will hopefully have a higher click intent, which takes me to my next example. And what I also like to do before I go into the next example is cluster those inhospitable SERPs. For example, if they're pure entity keywords, named entities, I group them in one segment and filter that out of my general reporting and growth analysis. When I include this segment, it's over half of all my impressions but less than 1% of clicks, so I don't wanna get blindsided by those queries. So, next point: leverage SERP features to your advantage in order to drive even more clicks to your website. Here's a keyword that we actually acted upon, seeing as the research intent was there. Research intent is more informational, but there's some necessity to click through to your website to understand more. That's a nice example: it's about meta keywords.
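Before moving on to that example: the clustering step just described, segmenting out inhospitable entity/navigational queries and measuring how much impression volume (versus click volume) they soak up, can be sketched like this. The entity list, classification rule, and sample rows are all hypothetical.

```python
# Segment out "inhospitable" entity/navigational queries and measure how much
# of the impression volume (vs. click volume) they account for. The entity
# list and sample GSC-style rows are hypothetical.

ENTITY_QUERIES = {"bing", "google", "facebook"}

def split_segments(rows):
    inhospitable = [r for r in rows if r["query"] in ENTITY_QUERIES]
    actionable = [r for r in rows if r["query"] not in ENTITY_QUERIES]
    return inhospitable, actionable

def share(segment, rows, field):
    total = sum(r[field] for r in rows) or 1
    return sum(r[field] for r in segment) / total

rows = [
    {"query": "bing", "clicks": 5, "impressions": 60000},
    {"query": "what are meta keywords", "clicks": 900, "impressions": 15000},
    {"query": "serp", "clicks": 300, "impressions": 20000},
]
inhosp, actionable = split_segments(rows)
impression_share = share(inhosp, rows, "impressions")  # over half of impressions
click_share = share(inhosp, rows, "clicks")            # under 1% of clicks
```

Filtering the `inhospitable` segment out of day-to-day reporting keeps those impression-heavy, click-empty queries from distorting average CTR and position.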
So people, yes, are still searching for meta keywords, which is quite interesting. And you can clearly see when we were first tested in the top 10, right around that stage there. But the average CTR during this test was not high enough, so we bounced back down to page three, bye-bye. But what we did was apply some on-page optimizations again, and we clawed our way back up to the top 10. Again, it's very clear to see there. But this time, we identified a new opportunity. We took action and optimized the snippet, as well as the direct answer for the most dominant phrase, "what are meta keywords", using the URL analysis to uncover the most lucrative keywords the page was ranking for. That led to a significant rise, as you see there, in the CTR, and we even claimed the featured snippet. It's a really nice featured snippet that we won there, with the images as well. It was also helped by our amazing wiki, because building a collective resource assists with your large-scale featured snippet earning; Google can see that you're a collective powerhouse of this knowledge, an expert on that subject. So yeah, very powerful. I've spoken quite a lot about featured snippets, and this was a photo I really liked because it's quite meme-worthy, on how you can win featured snippets by becoming a well-structured, engaging and satisfying resource with relevant authority and high accessibility. But 2020 was a bad year for featured snippets, so I kind of stopped. Then again, 2020 was a bad year for a lot of things. And that's mainly because of the deduplication of featured snippets, "snippet-gate" as I like to call it. I think it was Dr. Pete who first noticed that if a featured snippet was present on the SERP, then its standard result was taken away.
And I find it quite funny that Danny Sullivan said this "declutters the results and helps users locate relevant information more easily", which is very cheeky, because we have very ad-heavy SERPs that completely blend in with organic results. What? Okay, anyway. So, from position zero to position one. That's an example of how it looked in December before the deduplication, and then in January afterwards: we were ranking on position zero and position one, and now only with that position-one featured snippet. After that, a lot of SEOs were like, well, now I'm not doing it. So I did an analysis; I wanted to see how our featured snippets were performing. And it turns out our high-position CTR for featured snippets is actually slightly lower than for the standard results; the featured snippets weren't performing as well. But to be fair, many of our featured snippets are for high informational-intent queries, and most of them weren't standing out very well on desktop because there were no images, something we're working on incorporating, all those helpful diagrams, to increase click-worthiness. Then I did another analysis with a client's website that had a lot of transactional-intent featured snippets, and they had a much better CTR with their featured snippet than with just the high-ranking standard result. And here's another example of why it's so important to claim featured snippets when the query offers one. This page is always at position number one, but it makes a big impact whether that page is getting the featured snippet or just the result beneath it: that is a 20% CTR versus a 2% CTR difference. I know what I want. So, is it still worth it? Yes, yes it is. Even for a purely answer-intent query, there are some amazing benefits: outranking competitors, and showing the brand and trust value that Google has in you, which will hopefully uplift conversions down the line and show you as an expert in the subject.
It's also helpful for conversational search and voice search. But bear in mind that there can be drawbacks as well, such as on-SERP satisfaction reducing your click-through rate. Sometimes Google misinterprets a query and provides the wrong result. And sometimes people think that featured snippets look like ads, so they don't click on them. As well as featured snippets, there are tons of SERP features we can take advantage of for our organic results to drive more traffic. Moz Pro helps you detect those SERP features, such as local packs if you're a brick-and-mortar business, review snippets, related questions, and sitelinks. And there are also SERP features that may not always drive clicks, but definitely assist with brand awareness and being present in entity search results. There are some risks with structured data, which usually powers SERP features like this knowledge panel result here showing when MozCon is. But if you're not feeding Google that structured data, you risk not being represented in those knowledge card results that display direct data. And there are other advantages: you wanna be represented in those features rather than excluded, and you wanna be in control of the facts rather than having people shown misinformation, which can happen if Google pulls their own answers. Also, on-SERP satisfaction for those direct answer-intent queries is likely more useful than someone visiting your website and quickly going back to the search results. So my fourth point is knowing when Google is testing you in the top 10, and exceeding those expectations. Sometimes Google promotes you to page one to receive user feedback and evaluate the position; then they can understand if you're the best page for the job. So, another great example, this one from our website, about web architecture. We identified it via topic analysis and started to really work on it.
We got bumped up to the top 10 after some hygiene and on-page optimization, and then moved up from eight to four. We got tested again, quite successfully. At this stage it becomes more like: who is the reigning queen? Is this site even a top-position-one contender? And it turns out that, even a couple of days later, we were so great that we also won a featured snippet. It took us around three months to go from not ranking at all, from identifying that keyword, to becoming the featured snippet. But of course, it's not just about the CTR; the page also needs to fulfill the intent of the searcher. Oh yeah, I had to show you the featured snippet, because it has those panel links underneath it. It looks really sexy there. And it brings in engaged clicks, which it seems to do quite well: it has a very good average time on page of five and a half minutes, which takes me gorgeously full circle back to the beginning, to the long click. Or the last click, when search intent is successfully fulfilled; that can be a little bonus on top. And it's not about whole pages, it's about the different kinds of information they hold. In this case, you can see the page is ranking for over 1,500 different keywords. Yeah, quite a lot. And Google is tweaking the keyword rankings continuously. So when I compare these two specific keywords the same page is ranking for, you can see how the keyword with a very good CTR rose up to be the number one result, while another keyword ranking got demoted because of poor CTR. And some of you might see all this information and start wondering how you can manipulate it and cheat the system. But don't, because it won't work. I keep seeing businesses like this one, which guarantees that clicks are a direct ranking factor and will employ real humans to click through to your website, which is, you know, why we can't have nice things.
They base all of this on Rand's famous experiment back in 2014, where he asked the audience to click on the result at position number seven, and after a few hours it got up to number one. But after some time, it reverted back. Going back to that patent again: there are safeguards against spammers, users who generate fraudulent clicks in an attempt to boost certain search results, and measures can be taken to ensure that the user selection data is meaningful even when very little data is available for a given query. They go on to explain, and I'm running out of time so I'll wrap it up a little bit, how a user should behave over time. So, long story short, they can be aware of when this is being manipulated. We even tested it ourselves with one of Marcus's websites. This keyword, in German, sorry, means job advertisements. In the beginning, there was a normal CTR for the position, around 2%. Then at position nine it hit 37%, and at position 10 it hit 57%. And remember, this is job offers, a really lucrative, high-volume keyword. And then Google was like, nah, something is going wrong here, and kicked us down to position 77. So, okay, party people, sorry, I'm a bit over time, let's wrap it up. For crucial search performance analysis that you own, use that GSC data. When it comes to your own site's wellbeing, utilize it as best as you can, to give definitive support to your hypotheses, your understandings, and so on. But turn it into a masterpiece; use it wisely. Remember, your time is precious, so utilize segments. For example, I like to build informational-intent keyword segments using question words, and also keep filtering down to find those underperforming keywords that need some love and attention. Here's an example I can skip through.
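That question-word segmenting and underperformer filtering can be sketched as a simple classifier over GSC-style rows. The question-word list, the 3% expected-CTR benchmark, and the sample rows are all illustrative assumptions.

```python
# Segment keywords into informational question intent, then surface the
# underperformers within that segment. The question-word list, the CTR
# benchmark, and the sample rows are illustrative assumptions.

QUESTION_WORDS = ("what", "how", "why", "when", "who", "which", "can", "does")

def is_question(query):
    words = query.lower().split()
    return bool(words) and words[0] in QUESTION_WORDS

def underperformers(rows, expected_ctr=0.03):
    """Question-intent keywords whose CTR sits below the benchmark."""
    return [
        r["query"]
        for r in rows
        if is_question(r["query"])
        and r["clicks"] / max(r["impressions"], 1) < expected_ctr
    ]

rows = [
    {"query": "what are meta keywords", "clicks": 10, "impressions": 5000},
    {"query": "how to win featured snippets", "clicks": 400, "impressions": 6000},
    {"query": "ryte search success", "clicks": 50, "impressions": 800},
]
needs_love = underperformers(rows)
```

Here only the first row qualifies: it's a question query with a CTR of 0.2%, well under the assumed benchmark, so it's the kind of keyword that needs some love and attention.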
Be aware of when Google is testing you in the top 10 and act accordingly. You can do things like setting up alerts that trigger every time a focus keyword rises onto page one, or when it's dropping, so you can act accordingly. Keep standing out in those search spaces. Aim for those engaged long clicks; strive for positive last clicks, doing all in your power to satisfy those users and creating those enticing features. And keep combining those skills with fabulous user experiences that result in really positive uplifts in your brand association and around your product. Keep proving your worth in the SERPs. Stay strong, everyone. I miss you all. Sending my love from Munich. Thank you so much for listening, and keep in touch. I'm Izzy Smith. Thank you, MozCon. Bye.