So, the Moz crew asked me here to speak about a subject I'm really passionate about, something I'm very, very excited to be talking to you about today. Basketball. Technically, they asked me to come and talk about the future of SEO, and we'll try and get on to that. But we're going to start with basketball. So this is a shot chart. These are the 200 most frequent locations for shots taken in the NBA, in professional basketball. And you can see a number of distinct regions on here. You can see the dunks and the layups around the rim. You can see the three-point shots from outside the arc. And you can see what's called the mid-range, which is all the jump shots in between. Some of the most famous, iconic shots of all time came from the mid-range. This is Michael Jordan winning an NBA championship from there. But I'm slightly misleading you, actually, because this is what the game looked like in 2001. It looks like this. Today, the game looks like this. Let's just look at those one by one. It's incredible. That is the difference. This is what the game looks like in the modern era. So why is that? What's been going on? Well, it started with the installation of advanced camera systems in every NBA arena, coupled with a whole load of machine learning and artificial intelligence technology that enabled those cameras to track every single player and the ball throughout every single game in the season. And they layer advanced statistics on top of it. So it's not just tracking each individual. It can also identify match-ups, how closely guarded someone is when they shoot, whether they're in a double team, which way they dribbled, all that kind of stuff. And by crunching this data and looking at these numbers in detail, the analysts were able to establish that the mid-range was less valuable than everyone had realized. They had some basic stats beforehand, and they thought that folks were often missing the mid-range jump shots because they were closely guarded or whatever else. But it turns out they're just really hard. The numbers show that they're almost as hard as the three-point shots, but worth much less. So this moves into a new era where you still see the dunks and the layups, right? Those are easy enough to be worth taking, even though they're only worth two points. And the three-point shots, worth 50% more, are harder, but because they're worth so much more, they're still worth taking. But that mid-range is deceptive. It's tempting, but it is often the wrong choice. So, I suppose, okay, fine. We should talk a little bit about SEO. Okay. So what's the analytics revolution in our industry? What's coming to SEO? Well, I don't think it's cameras. I don't think it's going to be people watching us type faster. I think it comes from the world of testing. You're probably familiar with conversion rate optimization testing, which focuses on getting more out of the funnel, right? Same amount of traffic, more conversions, more revenue, more business. SEO testing is a similar concept, but focused on growing the top of the funnel, getting more organic search visibility. And what SEO testing lets us do is take individual on-site changes. So we have a hypothesis. This particular one is taking templated content that was just boilerplate across a whole big site section and replacing it with custom-written, unique content for every single page.
It lets us take an individual change like this and measure the specific impact of that change with statistical accuracy, confidence bars, all that kind of stuff. And this particular one was a big win: a 14% lift in organic traffic. You can even take this to the next level and measure not just traffic, but revenue and actual business impact, which is what we call full funnel testing. So, back to the basketball analogy. This basically means that we can take more of the easy shots. In my analogy, the dunks are the winning tests: you have a winning test, it's very likely to perform well, you roll it out. I'm going to slightly overstretch this analogy and say the three-point shots are new content, new pages, new site sections. Harder than tweaking stuff you've already got on your website, but worth a lot. That's not the subject of my presentation; we have some great experts talking about it. But the insight here is similar to the NBA one: we should be trying to stop taking the deceptively attractive but surprisingly low-percentage shots, which are the untested on-site changes. Because I don't know about you, but I've been in this situation a ton of times, right? You've got your crossover, which, in my analogy, is finding an amazing keyword research insight. You've done your keyword research, you've found something you're really excited about, and the defender's ankles are gone. Very exciting. You pull up for your jump shot, which in my analogy is modifying titles and meta information on the page based on that keyword research insight. And here we go. This is anonymized, this is not the website we ran it on, but this is an example of a real test that we ran based on research into how people actually search in this market, making a change based on that research, the kind of change all of us have recommended a zillion times. In the process, we also cut out some repetition and some extra, spammy-looking keywords and so forth. And wow, that is an SEO airball: minus 27% organic traffic. So let's back up a second and run through how you run these kinds of SEO tests. This works on large websites. We mainly work with very large websites in industries like e-commerce, travel, real estate, jobs, listings and local pages, where there are tons of pages in what we call a site section: they're scalable, they get tons of traffic, and they're set up in a similar, templated way. This is obviously a small example, showing flights to different cities. Then you have your hypothesis, the change that you want to make to the site, the one you think is a good idea, that you think will result in more visibility and more organic search traffic. And what you then need to do is pick a control group of pages and a group of variant pages from within that site section. Our platform does this automatically, but all we're really doing is looking for groups of statistically similar pages: a group that will stay unchanged as the control, and a group that will get the hypothesis applied to them. So during the test, the website goes from looking like this to looking like that, where the pages inside the pink box are the variant pages with the change applied to them, and the rest of the pages are the control and stay as they were. Now, I'm not going to go deep into this; there's more on the website if you want the technical details.
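But just to make that page-bucketing step concrete, here's a minimal sketch of one way you might split a site section into statistically similar groups. To be clear, this is an illustration made up for this talk, not our platform's actual algorithm: it simply ranks pages by traffic and alternates assignment so both groups end up with a similar traffic profile.

```python
import random

# Toy control/variant split for an SEO test: stratify pages by traffic so
# the two groups look statistically similar before any change is applied.
# Illustrative only; not SearchPilot's actual bucketing algorithm.
rng = random.Random(42)
pages = [
    {"url": f"/flights/city-{i}", "weekly_sessions": rng.randint(50, 5000)}
    for i in range(200)
]

# Rank pages by traffic, then alternate assignment down the ranked list so
# every traffic stratum contributes equally to both groups.
pages.sort(key=lambda p: p["weekly_sessions"], reverse=True)
control = pages[0::2]
variant = pages[1::2]

def total(group):
    return sum(p["weekly_sessions"] for p in group)

print(f"control: {len(control)} pages, {total(control):,} sessions/week")
print(f"variant: {len(variant)} pages, {total(variant):,} sessions/week")
```

In practice you'd also want to check that the two groups track each other closely over a pre-test period before trusting the split.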
We can then go a level beyond this to the full funnel testing I mentioned, the kind that connects to revenue: you can cookie users and have it so that each individual user gets a consistent site-wide experience if you want to. But ultimately, this page split is the building block of an SEO test. And then, using a load of advanced statistics, we have a machine learning model that analyzes the analytics data that comes out of this and says, with the statistical confidence I showed in those charts, whether this was a good idea, how much of an uplift it was, and connects it to the real business benefit. So this is a little different, as I said, to the user-centric testing that you might be familiar with from conversion rate optimization. During a conversion rate test, you put the users into buckets. Some visitors to the website are in group A, some visitors are in group B, and they get a different experience. In an SEO test, it's the pages that are put into those two groups. The pages are allocated to either group A or group B and treated accordingly. And up to now, we just haven't had this kind of data, just like in the NBA 20 years ago. When I got started in the industry a very long time ago, I used to contribute to things like the, as it was at the time, SEOmoz ranking factor surveys, which asked me and a bunch of other people what we thought worked and didn't. It was kind of fun, but we had no idea if it was right or not. And this is very similar to the way that pro sports has always operated: by eye. Does this person look like a basketball player? Does this person look like they're going to be good? And then we've also tried to pay attention to what Google has told us, officially and unofficially. I've given whole presentations on how I think we should interpret what Google has said and what that means. And I'm sure some folks are probably a little skeptical, just like Chuck is about analytics in basketball. I love his quote: "Analytics don't work at all. It's just crap people who are really smart made up to try to get in the game because they have no talent." Which, I mean, fair play, I'll take it. But thanks, Chuck. Maybe I have no talent. Let's look at some analytics and judge for ourselves how easy this is to do by eye. Because you might think you can just go into your GA, look before and after the change, and see if it was a good idea or not. I don't think it works that way. So this is some real analytics data, with a change we think is a good idea deployed on the date marked with the green arrow. And unfortunately, traffic immediately drops. We see the following week, week over week, that traffic is down below past weeks. So maybe this was a bad idea. Maybe it was just a terrible hypothesis. But what if I told you that there was actually a Google algorithm update here, in between the change and the measurement? That could have had something to do with it. And admittedly, we could have seen this one coming: this was Thanksgiving, and the industry here is B2B, so it's not entirely surprising, perhaps, that organic search traffic is down on Thanksgiving. I think people have better things to do. Nonetheless, you could try and debug this. You could look at year-over-year data. You could segment it. But fundamentally, there is just a lot of confounding data in the mix here. And it is very, very hard, especially if we're talking about marginal impacts, to know if something was a good idea or not.
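And this is exactly the problem the control group solves: Thanksgiving and algorithm updates hit the control pages and the variant pages alike, so comparing the two groups nets the confounders out. Here's a toy version of that comparison, a simple difference-in-differences with completely made-up numbers; our actual model is a fancier machine learning forecast with confidence intervals, but the core move, judging the variant against a live control rather than against the past, is the same.

```python
import numpy as np

# Toy difference-in-differences: shared confounders (algorithm updates,
# Thanksgiving) hit control and variant pages alike, so comparing the two
# groups nets them out. All numbers are invented for illustration.
rng = np.random.default_rng(7)

control_pre  = rng.normal(1000, 40, 8)           # weekly sessions, 8 weeks
variant_pre  = rng.normal(1000, 40, 8)
season_hit   = -120                              # shared confounder after launch
true_uplift  = 90                                # real effect of the SEO change
control_post = rng.normal(1000 + season_hit, 40, 8)
variant_post = rng.normal(1000 + season_hit + true_uplift, 40, 8)

# Naive before/after on the variant alone: confounded by the seasonal hit.
naive = variant_post.mean() - variant_pre.mean()

# Difference-in-differences: subtract the control group's movement too.
did = naive - (control_post.mean() - control_pre.mean())

print(f"naive before/after estimate: {naive:+.0f} sessions/week")
print(f"diff-in-diff estimate:       {did:+.0f} sessions/week (true +{true_uplift})")
```

The naive estimate comes out negative because the seasonal dip swamps the real effect; the control-adjusted estimate recovers it.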
So let's just zoom out of this a little bit. This is the same chart, just smaller, with the same lines. Change deployed here, algorithm update, Thanksgiving. And let's roll this forward. It's a little hard to tell, but if I put the lines on, you can see that a month or two later, traffic is strongly up. I know you want a punchline at this point, but I'm just: confused monkey face. Was this a good idea? I don't know. Maybe. I hope so, because this was actually SearchPilot's own website; I'll talk about that in a second. I don't know. Maybe other things we did caused traffic to go up over those next couple of months. We published other content during that time. We got some other links. As I said, there were Google algorithm updates. Who knows? All these things confound the data. Anyway, as I mentioned, it was our site. Sometimes you just have to shoot the mid-range, right? Not all of us can play in the NBA. I can't dunk anymore. I never could shoot the three. I kind of have to live in the mid-range. And it's the same in SEO. I'm arguing that if you have a large, scalable website and the capability to run these kinds of tests, that's the future. But not everybody's in that situation. And in fact, SearchPilot ourselves, ironically, are not in our own target market: we run a relatively small B2B SaaS website with hundreds of pages, not tens of thousands. And so this particular change was one I personally made because I thought it was a good idea. I still think it's a good idea, but I don't know. It's very hard to tell. And this is a good moment to pick up on the intro, where Rob was talking about our shared history; we go back a long way. I spent 15 years running an agency. And at Distilled, I lost track of the number of times we would have conversations along the lines of: we've done a ton of work; can we unpick which bits were valuable and which weren't? Was this particular initiative a good idea or not? Unfortunately, in some cases you even have situations where the trend line has simply continued, before working with us and after working with us, and it's not obvious that there was a benefit. And maybe the work was still good, because if you don't do anything, traffic probably tanks, so maybe staying flat is the best you can hope for. But you never really know. So yeah, as I said, I did this for a long time, and I'm still involved with the agency. We spun out SearchPilot, as Rob mentioned, which is now an independent, standalone tech startup, and that's what I spend my time running. The rest of Distilled was acquired by Brainlabs. So I'm still up for talking agency stuff. But this is probably a good time in the presentation to address the elephant in the room, which is the fact that I run an SEO testing company. So it is not really surprising that I think this is a good idea. But actually, I'm going to argue that that gets the chicken and the egg the wrong way around. Because when we started building SearchPilot, I didn't want to build another reporting or analysis or diagnostic tool that just told you how things had gone. Not to throw shade at those, mainly because the SEO industry is full of excellent tools that do a great job of this already, and that wasn't the market I wanted to play in. We wanted to be an unfair edge for our customers. So it's not that I think SEO testing is the future because that's my job. We built this company, this is the company I wanted to run, because this is what I think the future of SEO looks like. So let's just recap for a minute. What does this mean for SEO?
Well, I think it means different things at different levels of the organizational hierarchy. For very senior executives, I think it's probably just about saying: this is possible. It is possible to connect individual on-site SEO initiatives to revenue and company performance. And if you're in that kind of space, it's necessary. If you're running large websites and competing in super competitive industries, your competitors are going to do this stuff. And it's hard, so you're going to have to invest in it. You're going to have to put budget into it. For marketing leadership, I think it's about the strategy. It's about saying: don't settle for the mid-range. Get to the rim. Your strategy should comprise new stuff, meaning new pages, new content, new site sections, plus the winning tests. And once you've got that, you can say: okay, this particular project resulted in this uplift; we can tie it to these business outcomes and put actual dollar values on things. And then you can drive the cadence, so you can go faster; I'll talk more about that in a moment. I've written in a bit more detail about both of these things, so if you want more about what the C-suite needs to know or what marketing leadership needs to know, there's a blog post for each. There are links in the deck, and I think by now a tweet has gone out while I'm on stage linking to a blog post with links to all this stuff, and also a place where you can register and I'll email you the slides, so you can get all the links and all the rest of it. So that's executives and marketing leadership. Let's talk tactics for a minute. A lot of folks in the room are the actual practitioners, right? It's your job to do the thing, to have the ideas of what to do. So let's dive into some stuff we've learned from, at this stage, hundreds of SEO tests across many, many websites. Number one: Google's execution of JavaScript is improving, but it is not perfect. We have seen all of the different outcomes in different tests. We've seen cases where a JavaScript dependency was a significant negative, a significant drag on organic search performance, and removing that dependency caused an uplift. Unfortunately, we've also seen cases where a ton of investment went into a sensible-seeming change, pre-rendering content that had been rendered only in client-side JavaScript, and there was no detectable benefit from that very expensive change to make and maintain, because that JavaScript was already being executed fine and working fine on the website. And we've even seen the JavaScript approach be positive, in cases where it improves the user experience and drives better performance on the website. In those rare cases, it can even be a good thing. And the big lesson to take away from this, I think, is that you can't run these SEO tests in JavaScript, because if you do, every single test is testing both your SEO hypothesis and whether that JavaScript is executing correctly; I'll show a quick way to sanity-check a JavaScript dependency in a moment. So that was lesson one. Lesson two is that the user experience stuff is definitely real, at least some of it. Google says intrusive interstitials can harm search visibility, and we've got tests where we put an interstitial in place that a product team desperately wanted and saw a significant drop in traffic. They can now ship it or not, as they wish, but at least they know the revenue impact of doing so and can weigh those things up.
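On that sanity check I promised: a crude first pass is just to fetch the raw HTML the server returns, which is roughly what exists before any JavaScript runs, and see whether the content you care about is in it. A minimal sketch follows; the URL and snippet are hypothetical, and this is no substitute for proper rendering tests.

```python
import urllib.request

# Crude check for a JavaScript dependency: if a piece of content is absent
# from the raw server HTML, it only exists after client-side rendering,
# and any SEO test on it is also a test of JavaScript execution.
def content_in_raw_html(url: str, snippet: str) -> bool:
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return snippet in html

# Hypothetical page and content snippet, purely for illustration:
print(content_in_raw_html("https://example.com/flights/nyc",
                          "Cheap flights to New York"))
```

If the snippet only shows up after client-side rendering, you know your change and your rendering are tangled together.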
And back on those interstitials: the team can maybe even look for ways to get the benefit, the conversion rate benefit, alongside mitigating those SEO impacts. Third lesson, though. So that was "listen to Google", because they say interstitials hurt. This one is "don't listen to Google", because Google repeatedly tells us that in a mobile-first indexing world, it is totally legitimate on a mobile device to have a user experience where you have to interact with the page to see certain bits of content, right? Accordions, content behind tabs, those kinds of things. And they keep telling us that this is not a problem, that they're totally fine with it. Except that every time we test bringing content out of those things, out of the accordions and out from behind the tabs, so it's visible on page load with just a scroll, performance improves. They've just said it again recently, I think last week or the week before, so we're going to have to test it again. Follow me on Twitter and I'll update you, but I think we're going to see the same thing, because that's what we've seen every other time. Lesson four: the stuff that you see on the search results page is critically important. And this shouldn't be a surprise to us, because everybody who's got paid search colleagues knows how much of that job over the years has been writing better adverts and getting a better click-through rate: same number of searches, more clicks. And yet this has been criminally under-invested in, by me as much as anyone else in the SEO world, in part because we didn't have the tools to understand whether what we were doing was working; you need the measurement to know. We've seen this with titles, with meta descriptions and, as I'll talk about in a second, with structured data as well. This particular change was a bad one, but we've seen big positive impacts as well as big negative ones. Structured data can be very powerful, but it's a little bit of a prisoner's dilemma. It mainly works when you're different to your competitors, and if you're successful and different to your competitors, they're just going to copy you. So it's very hard to have a sustainable impact with structured data, but you kind of need to do it, because if you don't, they'll get the advantage over you. And I love this one, incidentally: this structured data has stars, ratings, reviews, prices and an FAQ. Pretty wild. We don't work with Kayak, so that wasn't my doing, but it's still pretty cool. Anyway, structured data can definitely move the needle. I was supposed to be following a talk from Cyrus all about internal linking and the power of internal links. I'm not going to give his whole presentation in one slide, but I have given a presentation on how to run internal linking tests, which, again, you can follow the link to get. Punchline: internal links are good. That was Cyrus' talk in four words. Sorry, sorry. Content quality, lesson seven: content quality needs measurement. We've definitely seen cases, like the one I mentioned earlier, of going from boilerplate content to individually written, unique content per page and seeing a big benefit. But we've also seen cases where really terrible content was adding value, and removing it was a net negative. So yeah, content quality is a big deal, but very hard to predict in advance. And E-A-T is a thing; hopefully that's not too controversial.
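Since structured data came up: for anyone who hasn't poked under the hood, results like that Kayak one are driven by JSON-LD markup embedded in the page. Here's an illustrative sketch of the kind of markup that produces stars, prices and FAQs in a snippet. The schema.org types are real, but every value here is invented for the example.

```python
import json

# Illustrative JSON-LD for rich results: an aggregate rating, an offer
# price, and an FAQ. schema.org types are real; all values are made up.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Flights to New York",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "1287",
    },
    "offers": {"@type": "Offer", "price": "89.00", "priceCurrency": "USD"},
}

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "When is the cheapest time to fly?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Typically midweek, in the off-season.",
        },
    }],
}

# Each block gets embedded in the page head or body as a script tag.
for block in (product, faq):
    print(f'<script type="application/ld+json">{json.dumps(block)}</script>')
```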
On E-A-T, we've seen some tests where we can actually measure individual changes and attribute them. And the next one I include just because, with a lot of these things, you might be nodding along and thinking, yeah, yeah, obvious. This one we've had a couple of tests on, and every time it's worked, and I don't think it should: just moving HTML around while keeping the user experience exactly the same. So, changing the CSS so that the page looks exactly the same before and after, but moving the main body content up in the HTML, above things like the navigation and the sidebar. I don't think that should work in the third decade of the 21st century, but hey, here we are. If you want more of this, go to searchpilot.com/newsletter; you can sign up and get all our case studies sent to you as we publish these test results every couple of weeks. We've got a few minutes left, so here's one other kind of bonus insight. If you go to the casino and you play a game of chance like roulette once, you might very well win. It could happen. Continuing the basketball theme: a friend of mine was in Vegas, walked in, bet on red 23 (obviously) and won something like $750 on his first bet, having just walked into the casino. But then he proceeded to lose it all over the rest of the weekend, because while you might win once, if you play a thousand times, the casino is definitely going to win. When you have an edge, you want more goes. The casino wants you to play more, and if you have an edge, you want more iterations. You want to go faster. And these folks spotted that as well. Over the same time period in which the game changed like this, the cadence of the game, the speed of the game, changed like this: in the last 20 years, possessions per game are up almost 10%. Because if you have the edge, you want to play faster, you want to play more. We are the casino in this situation. We have the edge, so we want to go faster. We want to test more things. That's a fairly surface-level insight, but there is academic research that shows why it's even more true when we're talking about web testing. This is an academic paper written by folks at Bing, and it's really interesting if you're into this kind of thing. I encourage you to read it. It's reasonably accessible; it gets a bit technical in places, but it's definitely worth a read. I'm just going to let you digest that for a second. No, I'm kidding. We're not reading that. We're going to summarize it into plain English. This stuff means that, in order to determine the optimal strategy, it matters whether you can predict winners in advance, and it matters whether wins are typically frequent and small, or more often rare but large. This bit means that if wins are small, frequent and somewhat predictable, then what you want to do is pre-filter your tests, run only the most promising ones, run them for longer, and, at the margin, optimize for statistical certainty. Try to really be sure. Remember, these are very fine margins; you want to really be sure that you're picking out the winners. But if it's the other way around, if wins are large, rarer and relatively hard to predict in advance, then it's better to run more tests, faster, with less pre-filtering. Spend less time trying to decide if it's a good idea or not; just run the test, even at the expense of statistical confidence and power. Because remember what you're trying to do here: we're doing business, not science.
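To show the shape of that argument, here's a toy simulation with completely invented effect sizes: ideas are mostly duds with rare big winners, and we compare a few carefully filtered, high-powered tests against many faster, lower-powered ones over a year.

```python
import random

# Toy model of testing cadence (all numbers invented for illustration).
# Ideas are mostly duds, with rare large winners: the regime the Bing
# paper says website experiments tend to live in. We compare a few
# high-powered tests vs. many lower-powered ones over a year.
rng = random.Random(0)

def draw_effect():
    r = rng.random()
    if r < 0.70: return rng.gauss(0.00, 0.02)   # ~70% duds
    if r < 0.95: return rng.gauss(0.03, 0.01)   # ~25% small wins
    return rng.gauss(0.15, 0.03)                # ~5% big wins

def yearly_uplift(n_tests, detect_power):
    total = 0.0
    for _ in range(n_tests):
        effect = draw_effect()
        detected = rng.random() < detect_power
        if detected and effect > 0:   # roll out apparent winners only
            total += effect
    return total

trials = 2000
slow = sum(yearly_uplift(12, 0.90) for _ in range(trials)) / trials
fast = sum(yearly_uplift(48, 0.60) for _ in range(trials)) / trials
print(f"12 careful tests/yr: ~{slow:.2f} cumulative uplift")
print(f"48 fast tests/yr:    ~{fast:.2f} cumulative uplift")
```

Under assumptions like these, the faster cadence wins comfortably; change the assumed mix of wins and the answer can flip, which is exactly why the paper says the distribution of effects matters.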
We're not trying to get it down to the nth decimal place. We're trying to say: let's not roll out the really terrible stuff, let's quickly identify the wins, especially the big wins, and get them out the door sooner, before the competition, and let's iterate faster and run more of these tests. And the final punchline from the Bing folks is that they found website experiments tend to be the latter kind: relatively rare, relatively large, and relatively hard to predict in advance. So all of that points to testing cadence being the top of the maturity curve. First you need the ability to run SEO tests, then you want to connect that to revenue and business performance, and then, right at the top, is cadence as a core key performance indicator. And this overlays with what I was saying earlier about the different levels of the team. The practitioners on the marketing team are thinking of test ideas, running those tests and analyzing the data. The leadership is looking at that data connected to revenue. And the senior executives are asking: are we running these tests fast enough? Do we have the investment to drive performance here? And this looks very similar to other things you might have seen presented at MozCon. Heather Physioc, who I don't think is here today, presented an excellent maturity framework a few years ago, which you can overlay some of this stuff on. And so this is what we're doing. We're celebrating this kind of thing. This is a screenshot from our Slack of a customer of ours who we were celebrating for having run 27 tests in just a couple of months: nine were positive, and five were double-digit winners, those relatively rare, outsized results. So let's do this whole presentation in three slides, just to wrap up. We should be running tests. Our strategy should be winning tests and new content. And then: go faster. But we can do even better than that. Let's do it in one slide. This is the slide that, if you're going to tweet anything, I'm going to leave up there for a minute. And as I said, I've tweeted the link, so you can retweet my link to my post and all the rest of it. Modern on-site SEO: we should be cutting out those untested changes, the deceptively attractive mid-range. Focus on building out the new content, the new pages, the new site sections, and deploying winning tests. So I'm going to leave you with just a couple of things to actually do. One, I already mentioned: searchpilot.com/newsletter. This applies to everyone. We're trying to build the resource that I wish had existed when I was running an agency, one our consultants would have got value from. We're trying to publish those results even for those of you who can't test yourselves. So if you're in a B2B industry, or you're an agency working with smaller local customers, or any of those kinds of areas where you can't run your own SEO tests, the next best thing, even though it's imperfect in all the ways I've described, is to learn from other people's tests. And this is, in my opinion, the best resource for getting those. And then there's probably a couple of dozen of you in the room who are in the NBA, who are working on those massive websites with the huge budgets, in the super competitive industries, head-to-head with folks who are doing this kind of testing. And I really want to speak to you, obviously.
You've probably come on your own, or with one or two colleagues, and you've probably got a bigger team back at the office. I will happily give versions of this talk, or of the lessons we've learned from hundreds of SEO tests, for free. There's a form on the website where you can set all of that up. It's been an absolute blast talking to you all today, MozCon. I'm going to go out with... well, this. Excellent place to cut that. Thank you.